Test Report: Hyper-V_Windows 18703

817bcb10c8415237264ed1ad2e32746beadbf0a3:2024-04-19:34116

Failed tests (15/195)

TestAddons/Setup (215.75s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-586600 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p addons-586600 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: exit status 90 (3m35.6001391s)

-- stdout --
	* [addons-586600] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "addons-586600" primary control-plane node in "addons-586600" cluster
	* Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	W0419 16:59:38.355641   10836 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0419 16:59:38.357353   10836 out.go:291] Setting OutFile to fd 920 ...
	I0419 16:59:38.358077   10836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 16:59:38.358077   10836 out.go:304] Setting ErrFile to fd 924...
	I0419 16:59:38.358077   10836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 16:59:38.385999   10836 out.go:298] Setting JSON to false
	I0419 16:59:38.394612   10836 start.go:129] hostinfo: {"hostname":"minikube1","uptime":9637,"bootTime":1713561541,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0419 16:59:38.394612   10836 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 16:59:38.399638   10836 out.go:177] * [addons-586600] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0419 16:59:38.403479   10836 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 16:59:38.403479   10836 notify.go:220] Checking for updates...
	I0419 16:59:38.406535   10836 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 16:59:38.407705   10836 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0419 16:59:38.411184   10836 out.go:177]   - MINIKUBE_LOCATION=18703
	I0419 16:59:38.414106   10836 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 16:59:38.416910   10836 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 16:59:44.050519   10836 out.go:177] * Using the hyperv driver based on user configuration
	I0419 16:59:44.054159   10836 start.go:297] selected driver: hyperv
	I0419 16:59:44.054159   10836 start.go:901] validating driver "hyperv" against <nil>
	I0419 16:59:44.054159   10836 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 16:59:44.112421   10836 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 16:59:44.113246   10836 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 16:59:44.114025   10836 cni.go:84] Creating CNI manager for ""
	I0419 16:59:44.114025   10836 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 16:59:44.114025   10836 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 16:59:44.114245   10836 start.go:340] cluster config:
	{Name:addons-586600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-586600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Netwo
rkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I0419 16:59:44.114245   10836 iso.go:125] acquiring lock: {Name:mk297f2abb67cbbcd36490c866afe693892d0c05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 16:59:44.118390   10836 out.go:177] * Starting "addons-586600" primary control-plane node in "addons-586600" cluster
	I0419 16:59:44.120702   10836 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 16:59:44.121054   10836 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0419 16:59:44.121112   10836 cache.go:56] Caching tarball of preloaded images
	I0419 16:59:44.121112   10836 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0419 16:59:44.121112   10836 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 16:59:44.121737   10836 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-586600\config.json ...
	I0419 16:59:44.122306   10836 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-586600\config.json: {Name:mk3ef7e7ddbcea8576d0f69254ae0a09198d70aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 16:59:44.123425   10836 start.go:360] acquireMachinesLock for addons-586600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 16:59:44.123425   10836 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-586600"
	I0419 16:59:44.123425   10836 start.go:93] Provisioning new machine with config: &{Name:addons-586600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-586600 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 16:59:44.124143   10836 start.go:125] createHost starting for "" (driver="hyperv")
	I0419 16:59:44.125997   10836 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0419 16:59:44.125997   10836 start.go:159] libmachine.API.Create for "addons-586600" (driver="hyperv")
	I0419 16:59:44.125997   10836 client.go:168] LocalClient.Create starting
	I0419 16:59:44.127605   10836 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0419 16:59:44.424238   10836 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0419 16:59:45.146420   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0419 16:59:47.616246   10836 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0419 16:59:47.617060   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 16:59:47.617130   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0419 16:59:49.508042   10836 main.go:141] libmachine: [stdout =====>] : False
	
	I0419 16:59:49.508042   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 16:59:49.508127   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 16:59:51.066688   10836 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 16:59:51.066688   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 16:59:51.067579   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 16:59:55.194668   10836 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 16:59:55.194668   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 16:59:55.197266   10836 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0419 16:59:55.718073   10836 main.go:141] libmachine: Creating SSH key...
	I0419 16:59:55.885687   10836 main.go:141] libmachine: Creating VM...
	I0419 16:59:55.885687   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 16:59:58.886922   10836 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 16:59:58.886922   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 16:59:58.886922   10836 main.go:141] libmachine: Using switch "Default Switch"
	I0419 16:59:58.886922   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 17:00:00.766156   10836 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 17:00:00.766253   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:00.766253   10836 main.go:141] libmachine: Creating VHD
	I0419 17:00:00.766335   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-586600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0419 17:00:04.534279   10836 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-586600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C47C1731-18AA-46AD-A8C3-7B4388B14D3A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0419 17:00:04.535286   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:04.535286   10836 main.go:141] libmachine: Writing magic tar header
	I0419 17:00:04.535345   10836 main.go:141] libmachine: Writing SSH key tar header
	I0419 17:00:04.544986   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-586600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-586600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0419 17:00:07.770700   10836 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:00:07.770700   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:07.770781   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-586600\disk.vhd' -SizeBytes 20000MB
	I0419 17:00:10.313828   10836 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:00:10.313828   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:10.313942   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-586600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-586600' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0419 17:00:14.095052   10836 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-586600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0419 17:00:14.095206   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:14.095268   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-586600 -DynamicMemoryEnabled $false
	I0419 17:00:16.306476   10836 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:00:16.306476   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:16.306476   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-586600 -Count 2
	I0419 17:00:18.465459   10836 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:00:18.465541   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:18.465541   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-586600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-586600\boot2docker.iso'
	I0419 17:00:21.063471   10836 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:00:21.063471   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:21.063471   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-586600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-586600\disk.vhd'
	I0419 17:00:23.750060   10836 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:00:23.750293   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:23.750293   10836 main.go:141] libmachine: Starting VM...
	I0419 17:00:23.750370   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-586600
	I0419 17:00:26.890221   10836 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:00:26.890937   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:26.890971   10836 main.go:141] libmachine: Waiting for host to start...
	I0419 17:00:26.890971   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:00:29.194132   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:00:29.194132   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:29.194132   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:00:31.736064   10836 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:00:31.736335   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:32.743809   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:00:34.934414   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:00:34.934473   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:34.934473   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:00:37.479246   10836 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:00:37.479246   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:38.483523   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:00:40.652224   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:00:40.652224   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:40.653016   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:00:43.161703   10836 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:00:43.161703   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:44.175478   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:00:46.338641   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:00:46.338641   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:46.338641   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:00:48.819669   10836 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:00:48.819737   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:49.833881   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:00:52.047029   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:00:52.047846   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:52.047846   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:00:54.683883   10836 main.go:141] libmachine: [stdout =====>] : 172.19.38.109
	
	I0419 17:00:54.684027   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:54.684120   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:00:56.825149   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:00:56.825205   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:56.825205   10836 machine.go:94] provisionDockerMachine start ...
	I0419 17:00:56.825205   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:00:58.958339   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:00:58.959466   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:00:58.959466   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:01:01.567784   10836 main.go:141] libmachine: [stdout =====>] : 172.19.38.109
	
	I0419 17:01:01.567784   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:01.574791   10836 main.go:141] libmachine: Using SSH client type: native
	I0419 17:01:01.589923   10836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.38.109 22 <nil> <nil>}
	I0419 17:01:01.589923   10836 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 17:01:01.728026   10836 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0419 17:01:01.728255   10836 buildroot.go:166] provisioning hostname "addons-586600"
	I0419 17:01:01.728255   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:01:03.872292   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:01:03.872368   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:03.872368   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:01:06.445624   10836 main.go:141] libmachine: [stdout =====>] : 172.19.38.109
	
	I0419 17:01:06.445624   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:06.453048   10836 main.go:141] libmachine: Using SSH client type: native
	I0419 17:01:06.453201   10836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.38.109 22 <nil> <nil>}
	I0419 17:01:06.453201   10836 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-586600 && echo "addons-586600" | sudo tee /etc/hostname
	I0419 17:01:06.617142   10836 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-586600
	
	I0419 17:01:06.617263   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:01:08.725096   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:01:08.725096   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:08.725096   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:01:11.264643   10836 main.go:141] libmachine: [stdout =====>] : 172.19.38.109
	
	I0419 17:01:11.264643   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:11.271832   10836 main.go:141] libmachine: Using SSH client type: native
	I0419 17:01:11.272553   10836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.38.109 22 <nil> <nil>}
	I0419 17:01:11.272585   10836 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-586600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-586600/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-586600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 17:01:11.436604   10836 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 17:01:11.436723   10836 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0419 17:01:11.436834   10836 buildroot.go:174] setting up certificates
	I0419 17:01:11.436895   10836 provision.go:84] configureAuth start
	I0419 17:01:11.436895   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:01:13.559976   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:01:13.559976   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:13.560057   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:01:16.147495   10836 main.go:141] libmachine: [stdout =====>] : 172.19.38.109
	
	I0419 17:01:16.147495   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:16.148012   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:01:18.247773   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:01:18.247773   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:18.248527   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:01:20.842250   10836 main.go:141] libmachine: [stdout =====>] : 172.19.38.109
	
	I0419 17:01:20.842291   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:20.842413   10836 provision.go:143] copyHostCerts
	I0419 17:01:20.843135   10836 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0419 17:01:20.844609   10836 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0419 17:01:20.845998   10836 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0419 17:01:20.846650   10836 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-586600 san=[127.0.0.1 172.19.38.109 addons-586600 localhost minikube]
	I0419 17:01:21.122023   10836 provision.go:177] copyRemoteCerts
	I0419 17:01:21.135013   10836 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 17:01:21.135013   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:01:23.261364   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:01:23.261453   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:23.261453   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:01:25.809305   10836 main.go:141] libmachine: [stdout =====>] : 172.19.38.109
	
	I0419 17:01:25.809652   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:25.809731   10836 sshutil.go:53] new ssh client: &{IP:172.19.38.109 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-586600\id_rsa Username:docker}
	I0419 17:01:25.929969   10836 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7949445s)
	I0419 17:01:25.929969   10836 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0419 17:01:25.981958   10836 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0419 17:01:26.030370   10836 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0419 17:01:26.082493   10836 provision.go:87] duration metric: took 14.6454356s to configureAuth
	I0419 17:01:26.082535   10836 buildroot.go:189] setting minikube options for container-runtime
	I0419 17:01:26.083257   10836 config.go:182] Loaded profile config "addons-586600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:01:26.083308   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:01:28.222778   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:01:28.222778   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:28.223705   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:01:30.775094   10836 main.go:141] libmachine: [stdout =====>] : 172.19.38.109
	
	I0419 17:01:30.775698   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:30.781624   10836 main.go:141] libmachine: Using SSH client type: native
	I0419 17:01:30.782369   10836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.38.109 22 <nil> <nil>}
	I0419 17:01:30.782369   10836 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0419 17:01:30.938482   10836 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0419 17:01:30.938565   10836 buildroot.go:70] root file system type: tmpfs
	I0419 17:01:30.938785   10836 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0419 17:01:30.938819   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:01:33.023542   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:01:33.024536   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:33.024536   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:01:35.593915   10836 main.go:141] libmachine: [stdout =====>] : 172.19.38.109
	
	I0419 17:01:35.593915   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:35.604672   10836 main.go:141] libmachine: Using SSH client type: native
	I0419 17:01:35.605512   10836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.38.109 22 <nil> <nil>}
	I0419 17:01:35.605512   10836 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0419 17:01:35.773394   10836 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0419 17:01:35.773394   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:01:37.865134   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:01:37.865134   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:37.865591   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:01:40.375666   10836 main.go:141] libmachine: [stdout =====>] : 172.19.38.109
	
	I0419 17:01:40.375666   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:40.382697   10836 main.go:141] libmachine: Using SSH client type: native
	I0419 17:01:40.383243   10836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.38.109 22 <nil> <nil>}
	I0419 17:01:40.383424   10836 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0419 17:01:42.584422   10836 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0419 17:01:42.584977   10836 machine.go:97] duration metric: took 45.7596625s to provisionDockerMachine
	I0419 17:01:42.584977   10836 client.go:171] duration metric: took 1m58.458696s to LocalClient.Create
	I0419 17:01:42.585108   10836 start.go:167] duration metric: took 1m58.4587915s to libmachine.API.Create "addons-586600"
	I0419 17:01:42.585198   10836 start.go:293] postStartSetup for "addons-586600" (driver="hyperv")
	I0419 17:01:42.585198   10836 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 17:01:42.600029   10836 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 17:01:42.600029   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:01:44.721087   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:01:44.721733   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:44.721733   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:01:47.252160   10836 main.go:141] libmachine: [stdout =====>] : 172.19.38.109
	
	I0419 17:01:47.252375   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:47.252375   10836 sshutil.go:53] new ssh client: &{IP:172.19.38.109 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-586600\id_rsa Username:docker}
	I0419 17:01:47.359838   10836 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7597978s)
	I0419 17:01:47.373822   10836 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 17:01:47.380500   10836 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 17:01:47.380636   10836 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0419 17:01:47.381077   10836 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0419 17:01:47.381306   10836 start.go:296] duration metric: took 4.7960968s for postStartSetup
	I0419 17:01:47.384632   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:01:49.514341   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:01:49.514341   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:49.514619   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:01:52.046299   10836 main.go:141] libmachine: [stdout =====>] : 172.19.38.109
	
	I0419 17:01:52.047151   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:52.047388   10836 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-586600\config.json ...
	I0419 17:01:52.050104   10836 start.go:128] duration metric: took 2m7.9256204s to createHost
	I0419 17:01:52.050342   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:01:54.166018   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:01:54.166018   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:54.166688   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:01:56.628119   10836 main.go:141] libmachine: [stdout =====>] : 172.19.38.109
	
	I0419 17:01:56.628254   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:56.634652   10836 main.go:141] libmachine: Using SSH client type: native
	I0419 17:01:56.635374   10836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.38.109 22 <nil> <nil>}
	I0419 17:01:56.635374   10836 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0419 17:01:56.772717   10836 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713571316.777220859
	
	I0419 17:01:56.772874   10836 fix.go:216] guest clock: 1713571316.777220859
	I0419 17:01:56.772874   10836 fix.go:229] Guest: 2024-04-19 17:01:56.777220859 -0700 PDT Remote: 2024-04-19 17:01:52.0501045 -0700 PDT m=+133.798047801 (delta=4.727116359s)
	I0419 17:01:56.772990   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:01:58.787917   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:01:58.801737   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:01:58.801737   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:02:01.258256   10836 main.go:141] libmachine: [stdout =====>] : 172.19.38.109
	
	I0419 17:02:01.270419   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:02:01.277844   10836 main.go:141] libmachine: Using SSH client type: native
	I0419 17:02:01.278037   10836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.38.109 22 <nil> <nil>}
	I0419 17:02:01.278037   10836 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713571316
	I0419 17:02:01.429553   10836 main.go:141] libmachine: SSH cmd err, output: <nil>: Sat Apr 20 00:01:56 UTC 2024
	
	I0419 17:02:01.429553   10836 fix.go:236] clock set: Sat Apr 20 00:01:56 UTC 2024
	 (err=<nil>)
	I0419 17:02:01.429553   10836 start.go:83] releasing machines lock for "addons-586600", held for 2m17.3057978s
	I0419 17:02:01.429859   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:02:03.423765   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:02:03.423765   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:02:03.435927   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:02:05.872097   10836 main.go:141] libmachine: [stdout =====>] : 172.19.38.109
	
	I0419 17:02:05.872097   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:02:05.888874   10836 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 17:02:05.889129   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:02:05.901123   10836 ssh_runner.go:195] Run: cat /version.json
	I0419 17:02:05.901123   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-586600 ).state
	I0419 17:02:07.981669   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:02:07.981669   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:02:07.981761   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:02:07.981881   10836 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:02:07.982049   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:02:07.982080   10836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-586600 ).networkadapters[0]).ipaddresses[0]
	I0419 17:02:10.459023   10836 main.go:141] libmachine: [stdout =====>] : 172.19.38.109
	
	I0419 17:02:10.459023   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:02:10.472028   10836 sshutil.go:53] new ssh client: &{IP:172.19.38.109 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-586600\id_rsa Username:docker}
	I0419 17:02:10.493773   10836 main.go:141] libmachine: [stdout =====>] : 172.19.38.109
	
	I0419 17:02:10.493773   10836 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:02:10.495045   10836 sshutil.go:53] new ssh client: &{IP:172.19.38.109 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-586600\id_rsa Username:docker}
	I0419 17:02:10.564077   10836 ssh_runner.go:235] Completed: cat /version.json: (4.6629421s)
	I0419 17:02:10.577946   10836 ssh_runner.go:195] Run: systemctl --version
	I0419 17:02:10.677003   10836 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7880332s)
	I0419 17:02:10.690104   10836 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 17:02:10.699438   10836 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 17:02:10.713115   10836 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 17:02:10.745413   10836 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 17:02:10.745529   10836 start.go:494] detecting cgroup driver to use...
	I0419 17:02:10.745810   10836 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 17:02:10.801893   10836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0419 17:02:10.836684   10836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0419 17:02:10.855984   10836 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0419 17:02:10.871838   10836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0419 17:02:10.908797   10836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 17:02:10.945056   10836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0419 17:02:10.980537   10836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 17:02:11.019148   10836 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 17:02:11.053968   10836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0419 17:02:11.088928   10836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0419 17:02:11.127141   10836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0419 17:02:11.164997   10836 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 17:02:11.207632   10836 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 17:02:11.239940   10836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:02:11.443782   10836 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0419 17:02:11.476450   10836 start.go:494] detecting cgroup driver to use...
	I0419 17:02:11.493248   10836 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0419 17:02:11.531656   10836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 17:02:11.566482   10836 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 17:02:11.622980   10836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 17:02:11.664595   10836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 17:02:11.705169   10836 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0419 17:02:11.774423   10836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 17:02:11.792285   10836 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 17:02:11.845639   10836 ssh_runner.go:195] Run: which cri-dockerd
	I0419 17:02:11.864617   10836 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0419 17:02:11.882506   10836 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0419 17:02:11.926050   10836 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0419 17:02:12.130317   10836 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0419 17:02:12.322319   10836 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0419 17:02:12.322319   10836 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0419 17:02:12.367786   10836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:02:12.589129   10836 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 17:03:13.732651   10836 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1433746s)
	I0419 17:03:13.746487   10836 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0419 17:03:13.782978   10836 out.go:177] 
	W0419 17:03:13.783270   10836 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 20 00:01:40 addons-586600 systemd[1]: Starting Docker Application Container Engine...
	Apr 20 00:01:41 addons-586600 dockerd[669]: time="2024-04-20T00:01:41.021742991Z" level=info msg="Starting up"
	Apr 20 00:01:41 addons-586600 dockerd[669]: time="2024-04-20T00:01:41.022902411Z" level=info msg="containerd not running, starting managed containerd"
	Apr 20 00:01:41 addons-586600 dockerd[669]: time="2024-04-20T00:01:41.023936228Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=675
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.059775332Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.089876440Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.090015542Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.090166245Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.090189745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.090312947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.090408749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.090764555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.090909857Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.090934458Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.091000959Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.091107460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.091580968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.094558819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 20 00:01:40 addons-586600 systemd[1]: Starting Docker Application Container Engine...
	Apr 20 00:01:41 addons-586600 dockerd[669]: time="2024-04-20T00:01:41.021742991Z" level=info msg="Starting up"
	Apr 20 00:01:41 addons-586600 dockerd[669]: time="2024-04-20T00:01:41.022902411Z" level=info msg="containerd not running, starting managed containerd"
	Apr 20 00:01:41 addons-586600 dockerd[669]: time="2024-04-20T00:01:41.023936228Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=675
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.059775332Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.089876440Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.090015542Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.090166245Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.090189745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.090312947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.090408749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.090764555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.090909857Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.090934458Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.091000959Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.091107460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.091580968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.094558819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.094716021Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.094891624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.094991126Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.095107228Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.095291831Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.095392633Z" level=info msg="metadata content store policy set" policy=shared
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.122121383Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.122249585Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.122279286Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.122299986Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.122317687Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.122446489Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123035499Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123219802Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123360404Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123384205Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123407405Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123424105Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123440006Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123458806Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123514307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123531107Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123545707Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123559108Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123589208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123607008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123622209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123637809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123652209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123667509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123680610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123787411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123883713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.123978815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.124000415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.124015415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.124029615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.124047816Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.124079016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.124095217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.124109917Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.124162718Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.124183918Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.124198018Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.124212119Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.124343121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.124426622Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.124446722Z" level=info msg="NRI interface is disabled by configuration."
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.124779828Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.124920930Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.124994632Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 20 00:01:41 addons-586600 dockerd[675]: time="2024-04-20T00:01:41.125035532Z" level=info msg="containerd successfully booted in 0.067072s"
	Apr 20 00:01:42 addons-586600 dockerd[669]: time="2024-04-20T00:01:42.103920252Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 20 00:01:42 addons-586600 dockerd[669]: time="2024-04-20T00:01:42.139705417Z" level=info msg="Loading containers: start."
	Apr 20 00:01:42 addons-586600 dockerd[669]: time="2024-04-20T00:01:42.440313825Z" level=info msg="Loading containers: done."
	Apr 20 00:01:42 addons-586600 dockerd[669]: time="2024-04-20T00:01:42.463875669Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 20 00:01:42 addons-586600 dockerd[669]: time="2024-04-20T00:01:42.464102573Z" level=info msg="Daemon has completed initialization"
	Apr 20 00:01:42 addons-586600 dockerd[669]: time="2024-04-20T00:01:42.587018584Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 20 00:01:42 addons-586600 systemd[1]: Started Docker Application Container Engine.
	Apr 20 00:01:42 addons-586600 dockerd[669]: time="2024-04-20T00:01:42.589392329Z" level=info msg="API listen on [::]:2376"
	Apr 20 00:02:12 addons-586600 systemd[1]: Stopping Docker Application Container Engine...
	Apr 20 00:02:12 addons-586600 dockerd[669]: time="2024-04-20T00:02:12.622976832Z" level=info msg="Processing signal 'terminated'"
	Apr 20 00:02:12 addons-586600 dockerd[669]: time="2024-04-20T00:02:12.624877138Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 20 00:02:12 addons-586600 dockerd[669]: time="2024-04-20T00:02:12.625625340Z" level=info msg="Daemon shutdown complete"
	Apr 20 00:02:12 addons-586600 dockerd[669]: time="2024-04-20T00:02:12.625691440Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 20 00:02:12 addons-586600 dockerd[669]: time="2024-04-20T00:02:12.625754841Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 20 00:02:13 addons-586600 systemd[1]: docker.service: Deactivated successfully.
	Apr 20 00:02:13 addons-586600 systemd[1]: Stopped Docker Application Container Engine.
	Apr 20 00:02:13 addons-586600 systemd[1]: Starting Docker Application Container Engine...
	Apr 20 00:02:13 addons-586600 dockerd[1020]: time="2024-04-20T00:02:13.705997981Z" level=info msg="Starting up"
	Apr 20 00:03:13 addons-586600 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 20 00:03:13 addons-586600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 20 00:03:13 addons-586600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 20 00:03:13 addons-586600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0419 17:03:13.786221   10836 out.go:239] * 
	W0419 17:03:13.787711   10836 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 17:03:13.796290   10836 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-windows-amd64.exe start -p addons-586600 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: exit status 90
--- FAIL: TestAddons/Setup (215.75s)

TestDockerFlags (582.69s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-302200 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p docker-flags-302200 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: exit status 90 (8m28.4310698s)

-- stdout --
	* [docker-flags-302200] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "docker-flags-302200" primary control-plane node in "docker-flags-302200" cluster
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
-- /stdout --
** stderr ** 
	W0419 19:28:08.982314   14472 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0419 19:28:08.984316   14472 out.go:291] Setting OutFile to fd 1932 ...
	I0419 19:28:08.985041   14472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 19:28:08.985041   14472 out.go:304] Setting ErrFile to fd 1940...
	I0419 19:28:08.985041   14472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 19:28:09.010078   14472 out.go:298] Setting JSON to false
	I0419 19:28:09.018378   14472 start.go:129] hostinfo: {"hostname":"minikube1","uptime":18547,"bootTime":1713561541,"procs":213,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0419 19:28:09.018378   14472 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 19:28:09.024284   14472 out.go:177] * [docker-flags-302200] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0419 19:28:09.029277   14472 notify.go:220] Checking for updates...
	I0419 19:28:09.031286   14472 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 19:28:09.033284   14472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 19:28:09.036271   14472 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0419 19:28:09.038279   14472 out.go:177]   - MINIKUBE_LOCATION=18703
	I0419 19:28:09.040287   14472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 19:28:09.044285   14472 config.go:182] Loaded profile config "cert-expiration-098300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 19:28:09.044285   14472 config.go:182] Loaded profile config "force-systemd-env-320900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 19:28:09.044285   14472 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 19:28:09.045288   14472 config.go:182] Loaded profile config "running-upgrade-265900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 19:28:09.045288   14472 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 19:28:14.441015   14472 out.go:177] * Using the hyperv driver based on user configuration
	I0419 19:28:14.444447   14472 start.go:297] selected driver: hyperv
	I0419 19:28:14.444447   14472 start.go:901] validating driver "hyperv" against <nil>
	I0419 19:28:14.444447   14472 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 19:28:14.504167   14472 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 19:28:14.506518   14472 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0419 19:28:14.506518   14472 cni.go:84] Creating CNI manager for ""
	I0419 19:28:14.506518   14472 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 19:28:14.506518   14472 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 19:28:14.506979   14472 start.go:340] cluster config:
	{Name:docker-flags-302200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-302200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 19:28:14.507289   14472 iso.go:125] acquiring lock: {Name:mk297f2abb67cbbcd36490c866afe693892d0c05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 19:28:14.510601   14472 out.go:177] * Starting "docker-flags-302200" primary control-plane node in "docker-flags-302200" cluster
	I0419 19:28:14.513352   14472 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 19:28:14.513352   14472 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0419 19:28:14.513352   14472 cache.go:56] Caching tarball of preloaded images
	I0419 19:28:14.513892   14472 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0419 19:28:14.514302   14472 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 19:28:14.514528   14472 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\docker-flags-302200\config.json ...
	I0419 19:28:14.514700   14472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\docker-flags-302200\config.json: {Name:mk91e1f5adb5fee442d4b76e84310e0be3eaf0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:28:14.514967   14472 start.go:360] acquireMachinesLock for docker-flags-302200: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 19:33:02.157408   14472 start.go:364] duration metric: took 4m47.6418665s to acquireMachinesLock for "docker-flags-302200"
	I0419 19:33:02.158154   14472 start.go:93] Provisioning new machine with config: &{Name:docker-flags-302200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.0 ClusterName:docker-flags-302200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 19:33:02.158416   14472 start.go:125] createHost starting for "" (driver="hyperv")
	I0419 19:33:02.161924   14472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0419 19:33:02.162300   14472 start.go:159] libmachine.API.Create for "docker-flags-302200" (driver="hyperv")
	I0419 19:33:02.162300   14472 client.go:168] LocalClient.Create starting
	I0419 19:33:02.163127   14472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0419 19:33:02.163359   14472 main.go:141] libmachine: Decoding PEM data...
	I0419 19:33:02.163420   14472 main.go:141] libmachine: Parsing certificate...
	I0419 19:33:02.163704   14472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0419 19:33:02.164003   14472 main.go:141] libmachine: Decoding PEM data...
	I0419 19:33:02.164062   14472 main.go:141] libmachine: Parsing certificate...
	I0419 19:33:02.164203   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0419 19:33:04.095259   14472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0419 19:33:04.095259   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:33:04.095259   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0419 19:33:05.842129   14472 main.go:141] libmachine: [stdout =====>] : False
	
	I0419 19:33:05.842194   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:33:05.842194   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 19:33:07.378295   14472 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 19:33:07.378295   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:33:07.378472   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 19:33:11.070168   14472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 19:33:11.082399   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:33:11.084858   14472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0419 19:33:11.527157   14472 main.go:141] libmachine: Creating SSH key...
	I0419 19:33:11.820159   14472 main.go:141] libmachine: Creating VM...
	I0419 19:33:11.820501   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 19:33:14.879284   14472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 19:33:14.879284   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:33:14.879517   14472 main.go:141] libmachine: Using switch "Default Switch"
	I0419 19:33:14.879595   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 19:33:16.644840   14472 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 19:33:16.644840   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:33:16.645020   14472 main.go:141] libmachine: Creating VHD
	I0419 19:33:16.645020   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\docker-flags-302200\fixed.vhd' -SizeBytes 10MB -Fixed
	I0419 19:33:20.543762   14472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\docker-flags-302200\fixed.
	                          vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : A31C5553-8981-4B92-B42D-C28002723FBC
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0419 19:33:20.543762   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:33:20.543762   14472 main.go:141] libmachine: Writing magic tar header
	I0419 19:33:20.543762   14472 main.go:141] libmachine: Writing SSH key tar header
	I0419 19:33:20.557840   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\docker-flags-302200\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\docker-flags-302200\disk.vhd' -VHDType Dynamic -DeleteSource
	I0419 19:33:23.681675   14472 main.go:141] libmachine: [stdout =====>] : 
	I0419 19:33:23.681675   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:33:23.681675   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\docker-flags-302200\disk.vhd' -SizeBytes 20000MB
	I0419 19:33:26.325243   14472 main.go:141] libmachine: [stdout =====>] : 
	I0419 19:33:26.325243   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:33:26.338369   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM docker-flags-302200 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\docker-flags-302200' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0419 19:33:31.720071   14472 main.go:141] libmachine: [stdout =====>] : 
	Name                State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                ----- ----------- ----------------- ------   ------             -------
	docker-flags-302200 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0419 19:33:31.732344   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:33:31.732344   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName docker-flags-302200 -DynamicMemoryEnabled $false
	I0419 19:33:33.927249   14472 main.go:141] libmachine: [stdout =====>] : 
	I0419 19:33:33.927315   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:33:33.927371   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor docker-flags-302200 -Count 2
	I0419 19:33:36.105561   14472 main.go:141] libmachine: [stdout =====>] : 
	I0419 19:33:36.105640   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:33:36.105662   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName docker-flags-302200 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\docker-flags-302200\boot2docker.iso'
	I0419 19:33:38.653664   14472 main.go:141] libmachine: [stdout =====>] : 
	I0419 19:33:38.663737   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:33:38.663829   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName docker-flags-302200 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\docker-flags-302200\disk.vhd'
	I0419 19:33:41.237135   14472 main.go:141] libmachine: [stdout =====>] : 
	I0419 19:33:41.237135   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:33:41.237135   14472 main.go:141] libmachine: Starting VM...
	I0419 19:33:41.237135   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM docker-flags-302200
	I0419 19:33:48.794735   14472 main.go:141] libmachine: [stdout =====>] : 
	I0419 19:33:48.795783   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:33:48.795783   14472 main.go:141] libmachine: Waiting for host to start...
	I0419 19:33:48.795783   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:33:51.043307   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:33:51.043393   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:33:51.043393   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:33:53.606418   14472 main.go:141] libmachine: [stdout =====>] : 
	I0419 19:33:53.606418   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:33:54.624042   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:33:56.926947   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:33:56.932838   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:33:56.932907   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:33:59.671464   14472 main.go:141] libmachine: [stdout =====>] : 
	I0419 19:33:59.673311   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:00.680295   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:34:03.406757   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:34:03.406757   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:03.413990   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:34:06.307462   14472 main.go:141] libmachine: [stdout =====>] : 
	I0419 19:34:06.307462   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:07.317303   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:34:09.679356   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:34:09.679356   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:09.679570   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:34:12.410129   14472 main.go:141] libmachine: [stdout =====>] : 
	I0419 19:34:12.410129   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:13.424524   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:34:15.572528   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:34:15.572624   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:15.572764   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:34:18.128831   14472 main.go:141] libmachine: [stdout =====>] : 172.19.41.36
	
	I0419 19:34:18.128831   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:18.128831   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:34:20.199481   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:34:20.199481   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:20.199481   14472 machine.go:94] provisionDockerMachine start ...
	I0419 19:34:20.211958   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:34:22.314724   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:34:22.314724   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:22.327331   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:34:24.836169   14472 main.go:141] libmachine: [stdout =====>] : 172.19.41.36
	
	I0419 19:34:24.848212   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:24.854274   14472 main.go:141] libmachine: Using SSH client type: native
	I0419 19:34:24.854852   14472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.41.36 22 <nil> <nil>}
	I0419 19:34:24.854852   14472 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 19:34:24.988005   14472 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0419 19:34:24.988005   14472 buildroot.go:166] provisioning hostname "docker-flags-302200"
	I0419 19:34:24.988099   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:34:27.037468   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:34:27.037468   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:27.050822   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:34:29.650004   14472 main.go:141] libmachine: [stdout =====>] : 172.19.41.36
	
	I0419 19:34:29.650004   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:29.671482   14472 main.go:141] libmachine: Using SSH client type: native
	I0419 19:34:29.672085   14472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.41.36 22 <nil> <nil>}
	I0419 19:34:29.672115   14472 main.go:141] libmachine: About to run SSH command:
	sudo hostname docker-flags-302200 && echo "docker-flags-302200" | sudo tee /etc/hostname
	I0419 19:34:29.840012   14472 main.go:141] libmachine: SSH cmd err, output: <nil>: docker-flags-302200
	
	I0419 19:34:29.840093   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:34:31.968371   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:34:31.968371   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:31.982341   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:34:34.564624   14472 main.go:141] libmachine: [stdout =====>] : 172.19.41.36
	
	I0419 19:34:34.578029   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:34.584549   14472 main.go:141] libmachine: Using SSH client type: native
	I0419 19:34:34.584819   14472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.41.36 22 <nil> <nil>}
	I0419 19:34:34.584819   14472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdocker-flags-302200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 docker-flags-302200/g' /etc/hosts;
				else 
					echo '127.0.1.1 docker-flags-302200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 19:34:34.728723   14472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 19:34:34.728723   14472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0419 19:34:34.728723   14472 buildroot.go:174] setting up certificates
	I0419 19:34:34.728723   14472 provision.go:84] configureAuth start
	I0419 19:34:34.728723   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:34:36.782420   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:34:36.782420   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:36.795070   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:34:39.234735   14472 main.go:141] libmachine: [stdout =====>] : 172.19.41.36
	
	I0419 19:34:39.234735   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:39.234735   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:34:41.276507   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:34:41.276507   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:41.288622   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:34:43.816648   14472 main.go:141] libmachine: [stdout =====>] : 172.19.41.36
	
	I0419 19:34:43.816648   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:43.829407   14472 provision.go:143] copyHostCerts
	I0419 19:34:43.829560   14472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0419 19:34:43.829766   14472 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0419 19:34:43.829766   14472 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0419 19:34:43.830381   14472 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0419 19:34:43.831024   14472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0419 19:34:43.831024   14472 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0419 19:34:43.831024   14472 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0419 19:34:43.832001   14472 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0419 19:34:43.832935   14472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0419 19:34:43.832935   14472 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0419 19:34:43.832935   14472 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0419 19:34:43.833519   14472 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0419 19:34:43.834320   14472 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.docker-flags-302200 san=[127.0.0.1 172.19.41.36 docker-flags-302200 localhost minikube]
	I0419 19:34:44.100761   14472 provision.go:177] copyRemoteCerts
	I0419 19:34:44.111090   14472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 19:34:44.111090   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:34:46.142849   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:34:46.142849   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:46.156069   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:34:48.639864   14472 main.go:141] libmachine: [stdout =====>] : 172.19.41.36
	
	I0419 19:34:48.647128   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:48.647410   14472 sshutil.go:53] new ssh client: &{IP:172.19.41.36 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\docker-flags-302200\id_rsa Username:docker}
	I0419 19:34:48.758814   14472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6477148s)
	I0419 19:34:48.759358   14472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0419 19:34:48.759772   14472 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0419 19:34:48.808273   14472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0419 19:34:48.808704   14472 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0419 19:34:48.859319   14472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0419 19:34:48.859465   14472 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 19:34:48.906888   14472 provision.go:87] duration metric: took 14.1781367s to configureAuth
	I0419 19:34:48.906888   14472 buildroot.go:189] setting minikube options for container-runtime
	I0419 19:34:48.907479   14472 config.go:182] Loaded profile config "docker-flags-302200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 19:34:48.907698   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:34:50.991804   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:34:50.991804   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:50.991804   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:34:53.471113   14472 main.go:141] libmachine: [stdout =====>] : 172.19.41.36
	
	I0419 19:34:53.483245   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:53.489713   14472 main.go:141] libmachine: Using SSH client type: native
	I0419 19:34:53.490785   14472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.41.36 22 <nil> <nil>}
	I0419 19:34:53.490785   14472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0419 19:34:53.625162   14472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0419 19:34:53.625242   14472 buildroot.go:70] root file system type: tmpfs
	I0419 19:34:53.625474   14472 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0419 19:34:53.625557   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:34:55.693211   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:34:55.693211   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:55.693303   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:34:58.143920   14472 main.go:141] libmachine: [stdout =====>] : 172.19.41.36
	
	I0419 19:34:58.143920   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:34:58.162761   14472 main.go:141] libmachine: Using SSH client type: native
	I0419 19:34:58.163496   14472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.41.36 22 <nil> <nil>}
	I0419 19:34:58.163496   14472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="FOO=BAR"
	Environment="BAZ=BAT"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 --debug --icc=true 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0419 19:34:58.321207   14472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=FOO=BAR
	Environment=BAZ=BAT
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 --debug --icc=true 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0419 19:34:58.321332   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:35:00.376943   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:35:00.376943   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:35:00.377052   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:35:02.922390   14472 main.go:141] libmachine: [stdout =====>] : 172.19.41.36
	
	I0419 19:35:02.922390   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:35:02.944302   14472 main.go:141] libmachine: Using SSH client type: native
	I0419 19:35:02.944503   14472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.41.36 22 <nil> <nil>}
	I0419 19:35:02.944503   14472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0419 19:35:05.160702   14472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0419 19:35:05.160702   14472 machine.go:97] duration metric: took 44.961131s to provisionDockerMachine
	I0419 19:35:05.160801   14472 client.go:171] duration metric: took 2m2.9982547s to LocalClient.Create
	I0419 19:35:05.160801   14472 start.go:167] duration metric: took 2m2.9982547s to libmachine.API.Create "docker-flags-302200"
	I0419 19:35:05.160801   14472 start.go:293] postStartSetup for "docker-flags-302200" (driver="hyperv")
	I0419 19:35:05.160905   14472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 19:35:05.172892   14472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 19:35:05.172892   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:35:07.307051   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:35:07.307051   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:35:07.307205   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:35:09.822078   14472 main.go:141] libmachine: [stdout =====>] : 172.19.41.36
	
	I0419 19:35:09.822078   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:35:09.822790   14472 sshutil.go:53] new ssh client: &{IP:172.19.41.36 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\docker-flags-302200\id_rsa Username:docker}
	I0419 19:35:09.929829   14472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7569275s)
	I0419 19:35:09.944473   14472 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 19:35:09.953232   14472 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 19:35:09.953232   14472 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0419 19:35:09.954058   14472 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0419 19:35:09.954853   14472 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> 34162.pem in /etc/ssl/certs
	I0419 19:35:09.954853   14472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /etc/ssl/certs/34162.pem
	I0419 19:35:09.976129   14472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 19:35:10.000275   14472 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /etc/ssl/certs/34162.pem (1708 bytes)
	I0419 19:35:10.047817   14472 start.go:296] duration metric: took 4.8869022s for postStartSetup
	I0419 19:35:10.050977   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:35:12.125519   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:35:12.138671   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:35:12.139321   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:35:14.663150   14472 main.go:141] libmachine: [stdout =====>] : 172.19.41.36
	
	I0419 19:35:14.663150   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:35:14.677060   14472 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\docker-flags-302200\config.json ...
	I0419 19:35:14.680541   14472 start.go:128] duration metric: took 2m12.5218603s to createHost
	I0419 19:35:14.680611   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:35:16.718232   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:35:16.731448   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:35:16.731448   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:35:19.249727   14472 main.go:141] libmachine: [stdout =====>] : 172.19.41.36
	
	I0419 19:35:19.249727   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:35:19.256268   14472 main.go:141] libmachine: Using SSH client type: native
	I0419 19:35:19.256854   14472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.41.36 22 <nil> <nil>}
	I0419 19:35:19.256854   14472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0419 19:35:19.385409   14472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713580519.392023655
	
	I0419 19:35:19.385409   14472 fix.go:216] guest clock: 1713580519.392023655
	I0419 19:35:19.385409   14472 fix.go:229] Guest: 2024-04-19 19:35:19.392023655 -0700 PDT Remote: 2024-04-19 19:35:14.6805417 -0700 PDT m=+425.798926301 (delta=4.711481955s)
	I0419 19:35:19.385409   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:35:21.444709   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:35:21.444709   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:35:21.444709   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:35:23.960342   14472 main.go:141] libmachine: [stdout =====>] : 172.19.41.36
	
	I0419 19:35:23.960342   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:35:23.967259   14472 main.go:141] libmachine: Using SSH client type: native
	I0419 19:35:23.967259   14472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.41.36 22 <nil> <nil>}
	I0419 19:35:23.967259   14472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713580519
	I0419 19:35:24.111842   14472 main.go:141] libmachine: SSH cmd err, output: <nil>: Sat Apr 20 02:35:19 UTC 2024
	
	I0419 19:35:24.111959   14472 fix.go:236] clock set: Sat Apr 20 02:35:19 UTC 2024
	 (err=<nil>)
	I0419 19:35:24.111959   14472 start.go:83] releasing machines lock for "docker-flags-302200", held for 2m21.9542672s
	I0419 19:35:24.112217   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:35:26.226636   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:35:26.226636   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:35:26.234361   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:35:28.862092   14472 main.go:141] libmachine: [stdout =====>] : 172.19.41.36
	
	I0419 19:35:28.867933   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:35:28.874477   14472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 19:35:28.874671   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:35:28.887124   14472 ssh_runner.go:195] Run: cat /version.json
	I0419 19:35:28.888600   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-302200 ).state
	I0419 19:35:31.091327   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:35:31.091327   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:35:31.091327   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:35:31.104874   14472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:35:31.105022   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:35:31.105145   14472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-302200 ).networkadapters[0]).ipaddresses[0]
	I0419 19:35:33.788832   14472 main.go:141] libmachine: [stdout =====>] : 172.19.41.36
	
	I0419 19:35:33.788832   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:35:33.789243   14472 sshutil.go:53] new ssh client: &{IP:172.19.41.36 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\docker-flags-302200\id_rsa Username:docker}
	I0419 19:35:33.819661   14472 main.go:141] libmachine: [stdout =====>] : 172.19.41.36
	
	I0419 19:35:33.825774   14472 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:35:33.825996   14472 sshutil.go:53] new ssh client: &{IP:172.19.41.36 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\docker-flags-302200\id_rsa Username:docker}
	I0419 19:35:33.975514   14472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1008637s)
	I0419 19:35:33.975514   14472 ssh_runner.go:235] Completed: cat /version.json: (5.08838s)
	I0419 19:35:33.987715   14472 ssh_runner.go:195] Run: systemctl --version
	I0419 19:35:34.013269   14472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 19:35:34.023762   14472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 19:35:34.038691   14472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 19:35:34.069693   14472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 19:35:34.069786   14472 start.go:494] detecting cgroup driver to use...
	I0419 19:35:34.069991   14472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 19:35:34.119437   14472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0419 19:35:34.159132   14472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0419 19:35:34.181166   14472 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0419 19:35:34.192001   14472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0419 19:35:34.231432   14472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 19:35:34.271692   14472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0419 19:35:34.304351   14472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 19:35:34.341963   14472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 19:35:34.378371   14472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0419 19:35:34.419546   14472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0419 19:35:34.465033   14472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0419 19:35:34.504068   14472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 19:35:34.537399   14472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 19:35:34.571268   14472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 19:35:34.779455   14472 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0419 19:35:34.815707   14472 start.go:494] detecting cgroup driver to use...
	I0419 19:35:34.832648   14472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0419 19:35:34.877295   14472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 19:35:34.919260   14472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 19:35:34.970584   14472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 19:35:35.012863   14472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 19:35:35.050574   14472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0419 19:35:35.118736   14472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 19:35:35.145101   14472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 19:35:35.203148   14472 ssh_runner.go:195] Run: which cri-dockerd
	I0419 19:35:35.224261   14472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0419 19:35:35.249732   14472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0419 19:35:35.303628   14472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0419 19:35:35.519575   14472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0419 19:35:35.710821   14472 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0419 19:35:35.710821   14472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0419 19:35:35.760401   14472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 19:35:35.966193   14472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 19:36:37.102937   14472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1365173s)
	I0419 19:36:37.113795   14472 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0419 19:36:37.238329   14472 out.go:177] 
	W0419 19:36:37.253320   14472 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 20 02:35:03 docker-flags-302200 systemd[1]: Starting Docker Application Container Engine...
	Apr 20 02:35:03 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:03.586595811Z" level=info msg="Starting up"
	Apr 20 02:35:03 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:03.588000224Z" level=debug msg="Listener created for HTTP on tcp (0.0.0.0:2376)"
	Apr 20 02:35:03 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:03.588230926Z" level=debug msg="Listener created for HTTP on unix (/var/run/docker.sock)"
	Apr 20 02:35:03 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:03.588304726Z" level=info msg="containerd not running, starting managed containerd"
	Apr 20 02:35:03 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:03.591979258Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=664
	Apr 20 02:35:03 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:03.592556663Z" level=debug msg="created containerd monitoring client" address=/var/run/docker/containerd/containerd.sock module=libcontainerd
	Apr 20 02:35:03 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:03.592936966Z" level=debug msg="2024/04/20 02:35:03 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/var/run/docker/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /var/run/docker/containerd/containerd.sock: connect: no such file or directory\"" library=grpc
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.628468775Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.655666611Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.655822212Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.655910613Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.656031314Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.656146415Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.656251016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.656497418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.656607519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.656632119Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.656645519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.656894222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.657353126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.660367552Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.660476053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.660670354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.660693555Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.660872356Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.660964757Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.660981257Z" level=info msg="metadata content store policy set" policy=shared
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.684665363Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.684912165Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.685033566Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.685064566Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.685083366Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.685262268Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.685866173Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686027974Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686050675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686066775Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686083075Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686122775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686137575Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686171876Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686190476Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686206076Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686221476Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686259876Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686278077Z" level=debug msg="No blockio config file specified, blockio not configured"
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686286777Z" level=debug msg="No RDT config file specified, RDT not configured"
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686307477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686326177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686348577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686364777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686391978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686409378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686423778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686466478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686483778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686501879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686515579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686530179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686544379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686577879Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686618080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686652080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686794181Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686897982Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686951582Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686968383Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686983083Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.687153184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.687291885Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.687330786Z" level=info msg="NRI interface is disabled by configuration."
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.687783690Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.687961391Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.688010192Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.688083792Z" level=debug msg="sd notification" notified=false state="READY=1"
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.688103692Z" level=info msg="containerd successfully booted in 0.061259s"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.615267073Z" level=debug msg="Golang's threads limit set to 11970"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.616655284Z" level=debug msg="metrics API listening on /var/run/docker/metrics.sock"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.624965552Z" level=debug msg="Using default logging driver json-file"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.625244254Z" level=debug msg="processing event stream" module=libcontainerd namespace=plugins.moby
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.625368955Z" level=debug msg="No quota support for local volumes in /var/lib/docker/volumes: Filesystem does not support, or has not enabled quotas"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.664598275Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.678517688Z" level=debug msg="successfully detected metacopy status" storage-driver=overlay2 usingMetacopy=false
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.687336060Z" level=debug msg="backingFs=extfs, projectQuotaSupported=false, usingMetacopy=false, indexOff=\"index=off,\", userxattr=\"\"" storage-driver=overlay2
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.687542261Z" level=debug msg="Initialized graph driver overlay2"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.696406833Z" level=debug msg="Max Concurrent Downloads: 3"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.696464834Z" level=debug msg="Max Concurrent Uploads: 5"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.696488634Z" level=debug msg="Max Download Attempts: 5"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.696514334Z" level=info msg="Loading containers: start."
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.696575635Z" level=debug msg="Option DefaultDriver: bridge"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.696610735Z" level=debug msg="Option DefaultNetwork: bridge"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.696622035Z" level=debug msg="Network Control Plane MTU: 1500"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.696879437Z" level=debug msg="processing event stream" module=libcontainerd namespace=moby
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.719208619Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-ISOLATION]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.721911541Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -D PREROUTING -m addrtype --dst-type LOCAL -j DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.733243233Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -D OUTPUT -m addrtype --dst-type LOCAL ! --dst 127.0.0.0/8 -j DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.737422367Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -D OUTPUT -m addrtype --dst-type LOCAL -j DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.741830203Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -D PREROUTING]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.744433524Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -D OUTPUT]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.748094454Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -F DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.750522774Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -X DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.752716792Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -F DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.754561307Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -X DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.756435122Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -F DOCKER-ISOLATION-STAGE-1]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.758509739Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -X DOCKER-ISOLATION-STAGE-1]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.760360154Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -F DOCKER-ISOLATION-STAGE-2]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.762082068Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -X DOCKER-ISOLATION-STAGE-2]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.764047684Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -F DOCKER-ISOLATION]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.765896999Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -X DOCKER-ISOLATION]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.767501412Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -n -L DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.769602329Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -N DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.772032149Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.774002465Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -N DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.776067482Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER-ISOLATION-STAGE-1]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.777913897Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -N DOCKER-ISOLATION-STAGE-1]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.779887713Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER-ISOLATION-STAGE-2]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.781720428Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -N DOCKER-ISOLATION-STAGE-2]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.783681944Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-1 -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.785871161Z" level=debug msg="/usr/sbin/iptables, [--wait -A DOCKER-ISOLATION-STAGE-1 -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.787809577Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-2 -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.789953195Z" level=debug msg="/usr/sbin/iptables, [--wait -A DOCKER-ISOLATION-STAGE-2 -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.836455973Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.838804492Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -N DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.840500706Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-USER -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.843398230Z" level=debug msg="/usr/sbin/iptables, [--wait -A DOCKER-USER -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.845799549Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.847850766Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -j DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.866615218Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.869095639Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-USER -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.871370857Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.873485074Z" level=debug msg="/usr/sbin/iptables, [--wait -D FORWARD -j DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.875458590Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -j DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.884098361Z" level=debug msg="Allocating IPv4 pools for network bridge (b29bff1d589b69d40bf875a8812adad208d9e86a8198d59028bc13809120afd2)"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.884186961Z" level=debug msg="RequestPool(LocalDefault, , , _, false)"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.884388763Z" level=debug msg="RequestAddress(LocalDefault/172.17.0.0/16, <nil>, map[RequestAddressType:com.docker.network.gateway])"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.884520564Z" level=debug msg="Request address PoolID:172.17.0.0/16 Bits: 65536, Unselected: 65534, Sequence: (0x80000000, 1)->(0x0, 2046)->(0x1, 1)->end Curr:0 Serial:false PrefAddress:invalid IP "
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.884586365Z" level=debug msg="Did not find any interface with name docker0: Link not found"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.884703666Z" level=debug msg="Setting bridge mac address to 02:42:1c:84:ae:ca"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.885456272Z" level=debug msg="Assigning address to bridge interface docker0: 172.17.0.1/16"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.885568273Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.892993733Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -I POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.895986057Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C DOCKER -i docker0 -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.897510870Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -I DOCKER -i docker0 -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.899135283Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C POSTROUTING -m addrtype --src-type LOCAL -o docker0 -j MASQUERADE]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.901015498Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -i docker0 -o docker0 -j DROP]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.903077415Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -i docker0 -o docker0 -j ACCEPT]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.905549935Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -I FORWARD -i docker0 -o docker0 -j ACCEPT]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.908037856Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -i docker0 ! -o docker0 -j ACCEPT]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.910361274Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -I FORWARD -i docker0 ! -o docker0 -j ACCEPT]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.912628093Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C PREROUTING -m addrtype --dst-type LOCAL -j DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.914765810Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.917881638Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C OUTPUT -m addrtype --dst-type LOCAL -j DOCKER ! --dst 127.0.0.0/8]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.920060559Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -A OUTPUT -m addrtype --dst-type LOCAL -j DOCKER ! --dst 127.0.0.0/8]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.922195379Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -o docker0 -j DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.925316808Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -o docker0 -j DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.927144725Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.932210972Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.934597494Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-ISOLATION-STAGE-1]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.936575013Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -j DOCKER-ISOLATION-STAGE-1]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.938619532Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.940469349Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -I DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.942657370Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.944847090Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -I DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.978560305Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.981015428Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-USER -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.982956747Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.985150067Z" level=debug msg="/usr/sbin/iptables, [--wait -D FORWARD -j DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.987186986Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -j DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.989449707Z" level=info msg="Loading containers: done."
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.014356772Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.014620372Z" level=info msg="Daemon has completed initialization"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.123811346Z" level=debug msg="Registering routers"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.123934146Z" level=debug msg="Registering GET, /containers/{name:.*}/checkpoints"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.124062645Z" level=debug msg="Registering POST, /containers/{name:.*}/checkpoints"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.124147245Z" level=debug msg="Registering DELETE, /containers/{name}/checkpoints/{checkpoint}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.124281144Z" level=debug msg="Registering HEAD, /containers/{name:.*}/archive"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.124441944Z" level=debug msg="Registering GET, /containers/json"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.124501144Z" level=debug msg="Registering GET, /containers/{name:.*}/export"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.124604844Z" level=debug msg="Registering GET, /containers/{name:.*}/changes"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.124791243Z" level=debug msg="Registering GET, /containers/{name:.*}/json"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.124977442Z" level=debug msg="Registering GET, /containers/{name:.*}/top"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.125076142Z" level=debug msg="Registering GET, /containers/{name:.*}/logs"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.125156042Z" level=debug msg="Registering GET, /containers/{name:.*}/stats"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.125350741Z" level=debug msg="Registering GET, /containers/{name:.*}/attach/ws"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.125443441Z" level=debug msg="Registering GET, /exec/{id:.*}/json"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.125553541Z" level=debug msg="Registering GET, /containers/{name:.*}/archive"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.125673140Z" level=debug msg="Registering POST, /containers/create"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.125763140Z" level=debug msg="Registering POST, /containers/{name:.*}/kill"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.125949140Z" level=debug msg="Registering POST, /containers/{name:.*}/pause"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.126101739Z" level=debug msg="Registering POST, /containers/{name:.*}/unpause"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.126205839Z" level=debug msg="Registering POST, /containers/{name:.*}/restart"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.126302238Z" level=debug msg="Registering POST, /containers/{name:.*}/start"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.126491738Z" level=debug msg="Registering POST, /containers/{name:.*}/stop"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.126679537Z" level=debug msg="Registering POST, /containers/{name:.*}/wait"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.126800837Z" level=debug msg="Registering POST, /containers/{name:.*}/resize"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.127215036Z" level=debug msg="Registering POST, /containers/{name:.*}/attach"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.127435735Z" level=debug msg="Registering POST, /containers/{name:.*}/exec"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.127568635Z" level=debug msg="Registering POST, /exec/{name:.*}/start"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.127684034Z" level=debug msg="Registering POST, /exec/{name:.*}/resize"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.127863034Z" level=debug msg="Registering POST, /containers/{name:.*}/rename"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.134797613Z" level=debug msg="Registering POST, /containers/{name:.*}/update"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.135101012Z" level=debug msg="Registering POST, /containers/prune"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.135301512Z" level=debug msg="Registering POST, /commit"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.135472011Z" level=debug msg="Registering PUT, /containers/{name:.*}/archive"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.135708210Z" level=debug msg="Registering DELETE, /containers/{name:.*}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.136126509Z" level=debug msg="Registering GET, /images/json"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.136313609Z" level=debug msg="Registering GET, /images/search"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.136502108Z" level=debug msg="Registering GET, /images/get"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.136574208Z" level=debug msg="Registering GET, /images/{name:.*}/get"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.136826407Z" level=debug msg="Registering GET, /images/{name:.*}/history"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.137029006Z" level=debug msg="Registering GET, /images/{name:.*}/json"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.137209206Z" level=debug msg="Registering POST, /images/load"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.137328106Z" level=debug msg="Registering POST, /images/create"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.137465405Z" level=debug msg="Registering POST, /images/{name:.*}/push"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.137670805Z" level=debug msg="Registering POST, /images/{name:.*}/tag"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.137926704Z" level=debug msg="Registering POST, /images/prune"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.138047203Z" level=debug msg="Registering DELETE, /images/{name:.*}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.138157503Z" level=debug msg="Registering OPTIONS, /{anyroute:.*}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.138340003Z" level=debug msg="Registering GET, /_ping"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.138507302Z" level=debug msg="Registering HEAD, /_ping"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.138597402Z" level=debug msg="Registering GET, /events"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.138774301Z" level=debug msg="Registering GET, /info"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.138926801Z" level=debug msg="Registering GET, /version"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.139083400Z" level=debug msg="Registering GET, /system/df"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.139252200Z" level=debug msg="Registering POST, /auth"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.139391799Z" level=debug msg="Registering GET, /volumes"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.139505999Z" level=debug msg="Registering GET, /volumes/{name:.*}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.139694499Z" level=debug msg="Registering POST, /volumes/create"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.139930898Z" level=debug msg="Registering POST, /volumes/prune"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.140097497Z" level=debug msg="Registering PUT, /volumes/{name:.*}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.140277997Z" level=debug msg="Registering DELETE, /volumes/{name:.*}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.140483096Z" level=debug msg="Registering POST, /build"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.140678696Z" level=debug msg="Registering POST, /build/prune"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.142093991Z" level=debug msg="Registering POST, /build/cancel"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.142406690Z" level=debug msg="Registering POST, /session"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.142583290Z" level=debug msg="Registering POST, /swarm/init"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.142724789Z" level=debug msg="Registering POST, /swarm/join"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.142966789Z" level=debug msg="Registering POST, /swarm/leave"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.143189688Z" level=debug msg="Registering GET, /swarm"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.143323488Z" level=debug msg="Registering GET, /swarm/unlockkey"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.143489087Z" level=debug msg="Registering POST, /swarm/update"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.143708887Z" level=debug msg="Registering POST, /swarm/unlock"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.143936486Z" level=debug msg="Registering GET, /services"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.144123785Z" level=debug msg="Registering GET, /services/{id}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.144298785Z" level=debug msg="Registering POST, /services/create"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.144430884Z" level=debug msg="Registering POST, /services/{id}/update"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.145437081Z" level=debug msg="Registering DELETE, /services/{id}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.145862480Z" level=debug msg="Registering GET, /services/{id}/logs"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.146854977Z" level=debug msg="Registering GET, /nodes"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.146928277Z" level=debug msg="Registering GET, /nodes/{id}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.146995877Z" level=debug msg="Registering DELETE, /nodes/{id}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.147071477Z" level=debug msg="Registering POST, /nodes/{id}/update"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.147231376Z" level=debug msg="Registering GET, /tasks"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.147405676Z" level=debug msg="Registering GET, /tasks/{id}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.147628375Z" level=debug msg="Registering GET, /tasks/{id}/logs"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.147822074Z" level=debug msg="Registering GET, /secrets"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.147955274Z" level=debug msg="Registering POST, /secrets/create"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.148039274Z" level=debug msg="Registering DELETE, /secrets/{id}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.148196373Z" level=debug msg="Registering GET, /secrets/{id}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.148346373Z" level=debug msg="Registering POST, /secrets/{id}/update"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.148497272Z" level=debug msg="Registering GET, /configs"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.148643572Z" level=debug msg="Registering POST, /configs/create"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.150693266Z" level=debug msg="Registering DELETE, /configs/{id}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.150919365Z" level=debug msg="Registering GET, /configs/{id}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.151087965Z" level=debug msg="Registering POST, /configs/{id}/update"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.151261764Z" level=debug msg="Registering GET, /plugins"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.151426064Z" level=debug msg="Registering GET, /plugins/{name:.*}/json"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.151506563Z" level=debug msg="Registering GET, /plugins/privileges"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.151650163Z" level=debug msg="Registering DELETE, /plugins/{name:.*}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.151899962Z" level=debug msg="Registering POST, /plugins/{name:.*}/enable"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152016162Z" level=debug msg="Registering POST, /plugins/{name:.*}/disable"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152104762Z" level=debug msg="Registering POST, /plugins/pull"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152185461Z" level=debug msg="Registering POST, /plugins/{name:.*}/push"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152274661Z" level=debug msg="Registering POST, /plugins/{name:.*}/upgrade"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152345261Z" level=debug msg="Registering POST, /plugins/{name:.*}/set"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152431861Z" level=debug msg="Registering POST, /plugins/create"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152514860Z" level=debug msg="Registering GET, /distribution/{name:.*}/json"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152650560Z" level=debug msg="Registering POST, /grpc"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152713360Z" level=debug msg="Registering GET, /networks"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152797259Z" level=debug msg="Registering GET, /networks/"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152874559Z" level=debug msg="Registering GET, /networks/{id:.+}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152965159Z" level=debug msg="Registering POST, /networks/create"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.153082859Z" level=debug msg="Registering POST, /networks/{id:.*}/connect"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.153204858Z" level=debug msg="Registering POST, /networks/{id:.*}/disconnect"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.153320858Z" level=debug msg="Registering POST, /networks/prune"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.153429158Z" level=debug msg="Registering DELETE, /networks/{id:.*}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.159533239Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 20 02:35:05 docker-flags-302200 systemd[1]: Started Docker Application Container Engine.
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.159897038Z" level=info msg="API listen on [::]:2376"
	Apr 20 02:35:35 docker-flags-302200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 20 02:35:35 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:35.998164545Z" level=info msg="Processing signal 'terminated'"
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:35.999425346Z" level=debug msg="daemon configured with a 15 seconds minimum shutdown timeout"
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:35.999498046Z" level=debug msg="start clean shutdown of all containers with a 15 seconds timeout..."
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:36.000019147Z" level=debug msg="Unix socket /var/run/docker/libnetwork/af4db610cfaf.sock was closed. The external key listener will stop."
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:36.000177347Z" level=debug msg="Cleaning up old mountid : start."
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:36.000658848Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:36.000866748Z" level=debug msg="Cleaning up old mountid : done."
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:36.001328249Z" level=debug msg="Clean shutdown succeeded"
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:36.001887949Z" level=info msg="Daemon shutdown complete"
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:36.002130750Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:36.002363950Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 20 02:35:36 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:36.002791650Z" level=debug msg="received signal" signal=terminated
	Apr 20 02:35:36 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:36.002979851Z" level=debug msg="sd notification" notified=false state="STOPPING=1"
	Apr 20 02:35:37 docker-flags-302200 systemd[1]: docker.service: Deactivated successfully.
	Apr 20 02:35:37 docker-flags-302200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 20 02:35:37 docker-flags-302200 systemd[1]: Starting Docker Application Container Engine...
	Apr 20 02:35:37 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:37.076231052Z" level=info msg="Starting up"
	Apr 20 02:35:37 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:37.077128553Z" level=debug msg="Listener created for HTTP on tcp (0.0.0.0:2376)"
	Apr 20 02:35:37 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:37.077330253Z" level=debug msg="Listener created for HTTP on unix (/var/run/docker.sock)"
	Apr 20 02:35:37 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:37.095778575Z" level=debug msg="Golang's threads limit set to 11970"
	Apr 20 02:35:37 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:37.096728076Z" level=debug msg="metrics API listening on /var/run/docker/metrics.sock"
	Apr 20 02:35:37 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:37.098973279Z" level=debug msg="2024/04/20 02:35:37 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:35:38 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:38.100154893Z" level=debug msg="2024/04/20 02:35:38 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:35:39 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:39.462865345Z" level=debug msg="2024/04/20 02:35:39 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:35:42 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:42.210773776Z" level=debug msg="2024/04/20 02:35:42 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:35:45 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:45.765292684Z" level=debug msg="2024/04/20 02:35:45 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:35:48 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:48.291806847Z" level=debug msg="2024/04/20 02:35:48 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:35:51 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:51.779416074Z" level=debug msg="2024/04/20 02:35:51 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:35:54 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:54.868564819Z" level=debug msg="2024/04/20 02:35:54 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:35:58 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:58.193756921Z" level=debug msg="2024/04/20 02:35:58 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:01 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:01.682243242Z" level=debug msg="2024/04/20 02:36:01 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:05 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:05.150324786Z" level=debug msg="2024/04/20 02:36:05 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:07 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:07.856014003Z" level=debug msg="2024/04/20 02:36:07 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:10 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:10.587702163Z" level=debug msg="2024/04/20 02:36:10 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:13 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:13.303317388Z" level=debug msg="2024/04/20 02:36:13 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:15 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:15.712254399Z" level=debug msg="2024/04/20 02:36:15 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:19 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:19.149778152Z" level=debug msg="2024/04/20 02:36:19 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:22 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:22.747788433Z" level=debug msg="2024/04/20 02:36:22 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:25 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:25.438292961Z" level=debug msg="2024/04/20 02:36:25 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:28 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:28.811802473Z" level=debug msg="2024/04/20 02:36:28 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:31 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:31.344970390Z" level=debug msg="2024/04/20 02:36:31 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:34 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:34.285387525Z" level=debug msg="2024/04/20 02:36:34 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:37 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:37.098765574Z" level=debug msg="Cleaning up old mountid : start."
	Apr 20 02:36:37 docker-flags-302200 dockerd[1015]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 20 02:36:37 docker-flags-302200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 20 02:36:37 docker-flags-302200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 20 02:36:37 docker-flags-302200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
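	As a triage aid (not part of the minikube output): the root cause is the daemon's final error line, `failed to dial "/run/containerd/containerd.sock": context deadline exceeded`, meaning dockerd gave up waiting for containerd. A minimal sketch for pulling the failing socket path out of that error string when scanning many such reports; the `line` variable below is just the error text copied from the journal:

	```shell
	# Error line copied verbatim from the dockerd journal above
	line='failed to start daemon: failed to dial "/run/containerd/containerd.sock": context deadline exceeded'

	# Extract the quoted socket path that dockerd could not reach
	sock=$(echo "$line" | sed -n 's/.*dial "\([^"]*\)".*/\1/p')

	echo "$sock"   # prints /run/containerd/containerd.sock
	```

	On a live node the same path would then be checked directly (e.g. `ls -l /run/containerd/containerd.sock` and `systemctl status containerd`) to see whether containerd ever created its socket.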
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 20 02:35:03 docker-flags-302200 systemd[1]: Starting Docker Application Container Engine...
	Apr 20 02:35:03 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:03.586595811Z" level=info msg="Starting up"
	Apr 20 02:35:03 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:03.588000224Z" level=debug msg="Listener created for HTTP on tcp (0.0.0.0:2376)"
	Apr 20 02:35:03 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:03.588230926Z" level=debug msg="Listener created for HTTP on unix (/var/run/docker.sock)"
	Apr 20 02:35:03 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:03.588304726Z" level=info msg="containerd not running, starting managed containerd"
	Apr 20 02:35:03 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:03.591979258Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=664
	Apr 20 02:35:03 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:03.592556663Z" level=debug msg="created containerd monitoring client" address=/var/run/docker/containerd/containerd.sock module=libcontainerd
	Apr 20 02:35:03 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:03.592936966Z" level=debug msg="2024/04/20 02:35:03 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/var/run/docker/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /var/run/docker/containerd/containerd.sock: connect: no such file or directory\"" library=grpc
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.628468775Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.655666611Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.655822212Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.655910613Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.656031314Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.656146415Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.656251016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.656497418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.656607519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.656632119Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.656645519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.656894222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.657353126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.660367552Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.660476053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.660670354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.660693555Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.660872356Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.660964757Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.660981257Z" level=info msg="metadata content store policy set" policy=shared
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.684665363Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.684912165Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.685033566Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.685064566Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.685083366Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.685262268Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.685866173Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686027974Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686050675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686066775Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686083075Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686122775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686137575Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686171876Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686190476Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686206076Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686221476Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686259876Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686278077Z" level=debug msg="No blockio config file specified, blockio not configured"
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686286777Z" level=debug msg="No RDT config file specified, RDT not configured"
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686307477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686326177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686348577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686364777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686391978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686409378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686423778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686466478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686483778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686501879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686515579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686530179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686544379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686577879Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686618080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686652080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686794181Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686897982Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686951582Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686968383Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.686983083Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.687153184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.687291885Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.687330786Z" level=info msg="NRI interface is disabled by configuration."
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.687783690Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.687961391Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.688010192Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.688083792Z" level=debug msg="sd notification" notified=false state="READY=1"
	Apr 20 02:35:03 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:03.688103692Z" level=info msg="containerd successfully booted in 0.061259s"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.615267073Z" level=debug msg="Golang's threads limit set to 11970"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.616655284Z" level=debug msg="metrics API listening on /var/run/docker/metrics.sock"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.624965552Z" level=debug msg="Using default logging driver json-file"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.625244254Z" level=debug msg="processing event stream" module=libcontainerd namespace=plugins.moby
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.625368955Z" level=debug msg="No quota support for local volumes in /var/lib/docker/volumes: Filesystem does not support, or has not enabled quotas"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.664598275Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.678517688Z" level=debug msg="successfully detected metacopy status" storage-driver=overlay2 usingMetacopy=false
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.687336060Z" level=debug msg="backingFs=extfs, projectQuotaSupported=false, usingMetacopy=false, indexOff=\"index=off,\", userxattr=\"\"" storage-driver=overlay2
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.687542261Z" level=debug msg="Initialized graph driver overlay2"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.696406833Z" level=debug msg="Max Concurrent Downloads: 3"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.696464834Z" level=debug msg="Max Concurrent Uploads: 5"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.696488634Z" level=debug msg="Max Download Attempts: 5"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.696514334Z" level=info msg="Loading containers: start."
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.696575635Z" level=debug msg="Option DefaultDriver: bridge"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.696610735Z" level=debug msg="Option DefaultNetwork: bridge"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.696622035Z" level=debug msg="Network Control Plane MTU: 1500"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.696879437Z" level=debug msg="processing event stream" module=libcontainerd namespace=moby
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.719208619Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-ISOLATION]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.721911541Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -D PREROUTING -m addrtype --dst-type LOCAL -j DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.733243233Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -D OUTPUT -m addrtype --dst-type LOCAL ! --dst 127.0.0.0/8 -j DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.737422367Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -D OUTPUT -m addrtype --dst-type LOCAL -j DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.741830203Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -D PREROUTING]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.744433524Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -D OUTPUT]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.748094454Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -F DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.750522774Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -X DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.752716792Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -F DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.754561307Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -X DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.756435122Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -F DOCKER-ISOLATION-STAGE-1]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.758509739Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -X DOCKER-ISOLATION-STAGE-1]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.760360154Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -F DOCKER-ISOLATION-STAGE-2]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.762082068Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -X DOCKER-ISOLATION-STAGE-2]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.764047684Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -F DOCKER-ISOLATION]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.765896999Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -X DOCKER-ISOLATION]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.767501412Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -n -L DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.769602329Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -N DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.772032149Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.774002465Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -N DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.776067482Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER-ISOLATION-STAGE-1]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.777913897Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -N DOCKER-ISOLATION-STAGE-1]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.779887713Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER-ISOLATION-STAGE-2]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.781720428Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -N DOCKER-ISOLATION-STAGE-2]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.783681944Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-1 -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.785871161Z" level=debug msg="/usr/sbin/iptables, [--wait -A DOCKER-ISOLATION-STAGE-1 -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.787809577Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-2 -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.789953195Z" level=debug msg="/usr/sbin/iptables, [--wait -A DOCKER-ISOLATION-STAGE-2 -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.836455973Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.838804492Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -N DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.840500706Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-USER -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.843398230Z" level=debug msg="/usr/sbin/iptables, [--wait -A DOCKER-USER -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.845799549Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.847850766Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -j DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.866615218Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.869095639Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-USER -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.871370857Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.873485074Z" level=debug msg="/usr/sbin/iptables, [--wait -D FORWARD -j DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.875458590Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -j DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.884098361Z" level=debug msg="Allocating IPv4 pools for network bridge (b29bff1d589b69d40bf875a8812adad208d9e86a8198d59028bc13809120afd2)"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.884186961Z" level=debug msg="RequestPool(LocalDefault, , , _, false)"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.884388763Z" level=debug msg="RequestAddress(LocalDefault/172.17.0.0/16, <nil>, map[RequestAddressType:com.docker.network.gateway])"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.884520564Z" level=debug msg="Request address PoolID:172.17.0.0/16 Bits: 65536, Unselected: 65534, Sequence: (0x80000000, 1)->(0x0, 2046)->(0x1, 1)->end Curr:0 Serial:false PrefAddress:invalid IP "
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.884586365Z" level=debug msg="Did not find any interface with name docker0: Link not found"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.884703666Z" level=debug msg="Setting bridge mac address to 02:42:1c:84:ae:ca"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.885456272Z" level=debug msg="Assigning address to bridge interface docker0: 172.17.0.1/16"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.885568273Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.892993733Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -I POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.895986057Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C DOCKER -i docker0 -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.897510870Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -I DOCKER -i docker0 -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.899135283Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C POSTROUTING -m addrtype --src-type LOCAL -o docker0 -j MASQUERADE]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.901015498Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -i docker0 -o docker0 -j DROP]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.903077415Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -i docker0 -o docker0 -j ACCEPT]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.905549935Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -I FORWARD -i docker0 -o docker0 -j ACCEPT]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.908037856Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -i docker0 ! -o docker0 -j ACCEPT]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.910361274Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -I FORWARD -i docker0 ! -o docker0 -j ACCEPT]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.912628093Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C PREROUTING -m addrtype --dst-type LOCAL -j DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.914765810Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.917881638Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C OUTPUT -m addrtype --dst-type LOCAL -j DOCKER ! --dst 127.0.0.0/8]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.920060559Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -A OUTPUT -m addrtype --dst-type LOCAL -j DOCKER ! --dst 127.0.0.0/8]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.922195379Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -o docker0 -j DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.925316808Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -o docker0 -j DOCKER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.927144725Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.932210972Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.934597494Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-ISOLATION-STAGE-1]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.936575013Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -j DOCKER-ISOLATION-STAGE-1]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.938619532Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.940469349Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -I DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.942657370Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.944847090Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -I DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.978560305Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.981015428Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-USER -j RETURN]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.982956747Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.985150067Z" level=debug msg="/usr/sbin/iptables, [--wait -D FORWARD -j DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.987186986Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -j DOCKER-USER]"
	Apr 20 02:35:04 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:04.989449707Z" level=info msg="Loading containers: done."
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.014356772Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.014620372Z" level=info msg="Daemon has completed initialization"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.123811346Z" level=debug msg="Registering routers"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.123934146Z" level=debug msg="Registering GET, /containers/{name:.*}/checkpoints"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.124062645Z" level=debug msg="Registering POST, /containers/{name:.*}/checkpoints"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.124147245Z" level=debug msg="Registering DELETE, /containers/{name}/checkpoints/{checkpoint}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.124281144Z" level=debug msg="Registering HEAD, /containers/{name:.*}/archive"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.124441944Z" level=debug msg="Registering GET, /containers/json"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.124501144Z" level=debug msg="Registering GET, /containers/{name:.*}/export"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.124604844Z" level=debug msg="Registering GET, /containers/{name:.*}/changes"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.124791243Z" level=debug msg="Registering GET, /containers/{name:.*}/json"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.124977442Z" level=debug msg="Registering GET, /containers/{name:.*}/top"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.125076142Z" level=debug msg="Registering GET, /containers/{name:.*}/logs"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.125156042Z" level=debug msg="Registering GET, /containers/{name:.*}/stats"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.125350741Z" level=debug msg="Registering GET, /containers/{name:.*}/attach/ws"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.125443441Z" level=debug msg="Registering GET, /exec/{id:.*}/json"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.125553541Z" level=debug msg="Registering GET, /containers/{name:.*}/archive"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.125673140Z" level=debug msg="Registering POST, /containers/create"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.125763140Z" level=debug msg="Registering POST, /containers/{name:.*}/kill"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.125949140Z" level=debug msg="Registering POST, /containers/{name:.*}/pause"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.126101739Z" level=debug msg="Registering POST, /containers/{name:.*}/unpause"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.126205839Z" level=debug msg="Registering POST, /containers/{name:.*}/restart"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.126302238Z" level=debug msg="Registering POST, /containers/{name:.*}/start"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.126491738Z" level=debug msg="Registering POST, /containers/{name:.*}/stop"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.126679537Z" level=debug msg="Registering POST, /containers/{name:.*}/wait"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.126800837Z" level=debug msg="Registering POST, /containers/{name:.*}/resize"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.127215036Z" level=debug msg="Registering POST, /containers/{name:.*}/attach"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.127435735Z" level=debug msg="Registering POST, /containers/{name:.*}/exec"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.127568635Z" level=debug msg="Registering POST, /exec/{name:.*}/start"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.127684034Z" level=debug msg="Registering POST, /exec/{name:.*}/resize"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.127863034Z" level=debug msg="Registering POST, /containers/{name:.*}/rename"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.134797613Z" level=debug msg="Registering POST, /containers/{name:.*}/update"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.135101012Z" level=debug msg="Registering POST, /containers/prune"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.135301512Z" level=debug msg="Registering POST, /commit"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.135472011Z" level=debug msg="Registering PUT, /containers/{name:.*}/archive"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.135708210Z" level=debug msg="Registering DELETE, /containers/{name:.*}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.136126509Z" level=debug msg="Registering GET, /images/json"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.136313609Z" level=debug msg="Registering GET, /images/search"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.136502108Z" level=debug msg="Registering GET, /images/get"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.136574208Z" level=debug msg="Registering GET, /images/{name:.*}/get"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.136826407Z" level=debug msg="Registering GET, /images/{name:.*}/history"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.137029006Z" level=debug msg="Registering GET, /images/{name:.*}/json"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.137209206Z" level=debug msg="Registering POST, /images/load"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.137328106Z" level=debug msg="Registering POST, /images/create"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.137465405Z" level=debug msg="Registering POST, /images/{name:.*}/push"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.137670805Z" level=debug msg="Registering POST, /images/{name:.*}/tag"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.137926704Z" level=debug msg="Registering POST, /images/prune"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.138047203Z" level=debug msg="Registering DELETE, /images/{name:.*}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.138157503Z" level=debug msg="Registering OPTIONS, /{anyroute:.*}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.138340003Z" level=debug msg="Registering GET, /_ping"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.138507302Z" level=debug msg="Registering HEAD, /_ping"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.138597402Z" level=debug msg="Registering GET, /events"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.138774301Z" level=debug msg="Registering GET, /info"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.138926801Z" level=debug msg="Registering GET, /version"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.139083400Z" level=debug msg="Registering GET, /system/df"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.139252200Z" level=debug msg="Registering POST, /auth"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.139391799Z" level=debug msg="Registering GET, /volumes"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.139505999Z" level=debug msg="Registering GET, /volumes/{name:.*}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.139694499Z" level=debug msg="Registering POST, /volumes/create"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.139930898Z" level=debug msg="Registering POST, /volumes/prune"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.140097497Z" level=debug msg="Registering PUT, /volumes/{name:.*}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.140277997Z" level=debug msg="Registering DELETE, /volumes/{name:.*}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.140483096Z" level=debug msg="Registering POST, /build"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.140678696Z" level=debug msg="Registering POST, /build/prune"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.142093991Z" level=debug msg="Registering POST, /build/cancel"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.142406690Z" level=debug msg="Registering POST, /session"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.142583290Z" level=debug msg="Registering POST, /swarm/init"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.142724789Z" level=debug msg="Registering POST, /swarm/join"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.142966789Z" level=debug msg="Registering POST, /swarm/leave"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.143189688Z" level=debug msg="Registering GET, /swarm"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.143323488Z" level=debug msg="Registering GET, /swarm/unlockkey"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.143489087Z" level=debug msg="Registering POST, /swarm/update"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.143708887Z" level=debug msg="Registering POST, /swarm/unlock"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.143936486Z" level=debug msg="Registering GET, /services"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.144123785Z" level=debug msg="Registering GET, /services/{id}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.144298785Z" level=debug msg="Registering POST, /services/create"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.144430884Z" level=debug msg="Registering POST, /services/{id}/update"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.145437081Z" level=debug msg="Registering DELETE, /services/{id}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.145862480Z" level=debug msg="Registering GET, /services/{id}/logs"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.146854977Z" level=debug msg="Registering GET, /nodes"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.146928277Z" level=debug msg="Registering GET, /nodes/{id}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.146995877Z" level=debug msg="Registering DELETE, /nodes/{id}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.147071477Z" level=debug msg="Registering POST, /nodes/{id}/update"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.147231376Z" level=debug msg="Registering GET, /tasks"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.147405676Z" level=debug msg="Registering GET, /tasks/{id}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.147628375Z" level=debug msg="Registering GET, /tasks/{id}/logs"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.147822074Z" level=debug msg="Registering GET, /secrets"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.147955274Z" level=debug msg="Registering POST, /secrets/create"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.148039274Z" level=debug msg="Registering DELETE, /secrets/{id}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.148196373Z" level=debug msg="Registering GET, /secrets/{id}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.148346373Z" level=debug msg="Registering POST, /secrets/{id}/update"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.148497272Z" level=debug msg="Registering GET, /configs"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.148643572Z" level=debug msg="Registering POST, /configs/create"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.150693266Z" level=debug msg="Registering DELETE, /configs/{id}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.150919365Z" level=debug msg="Registering GET, /configs/{id}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.151087965Z" level=debug msg="Registering POST, /configs/{id}/update"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.151261764Z" level=debug msg="Registering GET, /plugins"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.151426064Z" level=debug msg="Registering GET, /plugins/{name:.*}/json"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.151506563Z" level=debug msg="Registering GET, /plugins/privileges"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.151650163Z" level=debug msg="Registering DELETE, /plugins/{name:.*}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.151899962Z" level=debug msg="Registering POST, /plugins/{name:.*}/enable"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152016162Z" level=debug msg="Registering POST, /plugins/{name:.*}/disable"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152104762Z" level=debug msg="Registering POST, /plugins/pull"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152185461Z" level=debug msg="Registering POST, /plugins/{name:.*}/push"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152274661Z" level=debug msg="Registering POST, /plugins/{name:.*}/upgrade"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152345261Z" level=debug msg="Registering POST, /plugins/{name:.*}/set"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152431861Z" level=debug msg="Registering POST, /plugins/create"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152514860Z" level=debug msg="Registering GET, /distribution/{name:.*}/json"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152650560Z" level=debug msg="Registering POST, /grpc"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152713360Z" level=debug msg="Registering GET, /networks"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152797259Z" level=debug msg="Registering GET, /networks/"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152874559Z" level=debug msg="Registering GET, /networks/{id:.+}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.152965159Z" level=debug msg="Registering POST, /networks/create"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.153082859Z" level=debug msg="Registering POST, /networks/{id:.*}/connect"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.153204858Z" level=debug msg="Registering POST, /networks/{id:.*}/disconnect"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.153320858Z" level=debug msg="Registering POST, /networks/prune"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.153429158Z" level=debug msg="Registering DELETE, /networks/{id:.*}"
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.159533239Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 20 02:35:05 docker-flags-302200 systemd[1]: Started Docker Application Container Engine.
	Apr 20 02:35:05 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:05.159897038Z" level=info msg="API listen on [::]:2376"
	Apr 20 02:35:35 docker-flags-302200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 20 02:35:35 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:35.998164545Z" level=info msg="Processing signal 'terminated'"
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:35.999425346Z" level=debug msg="daemon configured with a 15 seconds minimum shutdown timeout"
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:35.999498046Z" level=debug msg="start clean shutdown of all containers with a 15 seconds timeout..."
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:36.000019147Z" level=debug msg="Unix socket /var/run/docker/libnetwork/af4db610cfaf.sock was closed. The external key listener will stop."
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:36.000177347Z" level=debug msg="Cleaning up old mountid : start."
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:36.000658848Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:36.000866748Z" level=debug msg="Cleaning up old mountid : done."
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:36.001328249Z" level=debug msg="Clean shutdown succeeded"
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:36.001887949Z" level=info msg="Daemon shutdown complete"
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:36.002130750Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 20 02:35:36 docker-flags-302200 dockerd[658]: time="2024-04-20T02:35:36.002363950Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 20 02:35:36 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:36.002791650Z" level=debug msg="received signal" signal=terminated
	Apr 20 02:35:36 docker-flags-302200 dockerd[664]: time="2024-04-20T02:35:36.002979851Z" level=debug msg="sd notification" notified=false state="STOPPING=1"
	Apr 20 02:35:37 docker-flags-302200 systemd[1]: docker.service: Deactivated successfully.
	Apr 20 02:35:37 docker-flags-302200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 20 02:35:37 docker-flags-302200 systemd[1]: Starting Docker Application Container Engine...
	Apr 20 02:35:37 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:37.076231052Z" level=info msg="Starting up"
	Apr 20 02:35:37 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:37.077128553Z" level=debug msg="Listener created for HTTP on tcp (0.0.0.0:2376)"
	Apr 20 02:35:37 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:37.077330253Z" level=debug msg="Listener created for HTTP on unix (/var/run/docker.sock)"
	Apr 20 02:35:37 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:37.095778575Z" level=debug msg="Golang's threads limit set to 11970"
	Apr 20 02:35:37 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:37.096728076Z" level=debug msg="metrics API listening on /var/run/docker/metrics.sock"
	Apr 20 02:35:37 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:37.098973279Z" level=debug msg="2024/04/20 02:35:37 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:35:38 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:38.100154893Z" level=debug msg="2024/04/20 02:35:38 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:35:39 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:39.462865345Z" level=debug msg="2024/04/20 02:35:39 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:35:42 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:42.210773776Z" level=debug msg="2024/04/20 02:35:42 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:35:45 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:45.765292684Z" level=debug msg="2024/04/20 02:35:45 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:35:48 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:48.291806847Z" level=debug msg="2024/04/20 02:35:48 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:35:51 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:51.779416074Z" level=debug msg="2024/04/20 02:35:51 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:35:54 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:54.868564819Z" level=debug msg="2024/04/20 02:35:54 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:35:58 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:35:58.193756921Z" level=debug msg="2024/04/20 02:35:58 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:01 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:01.682243242Z" level=debug msg="2024/04/20 02:36:01 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:05 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:05.150324786Z" level=debug msg="2024/04/20 02:36:05 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:07 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:07.856014003Z" level=debug msg="2024/04/20 02:36:07 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:10 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:10.587702163Z" level=debug msg="2024/04/20 02:36:10 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:13 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:13.303317388Z" level=debug msg="2024/04/20 02:36:13 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:15 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:15.712254399Z" level=debug msg="2024/04/20 02:36:15 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:19 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:19.149778152Z" level=debug msg="2024/04/20 02:36:19 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:22 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:22.747788433Z" level=debug msg="2024/04/20 02:36:22 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:25 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:25.438292961Z" level=debug msg="2024/04/20 02:36:25 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:28 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:28.811802473Z" level=debug msg="2024/04/20 02:36:28 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:31 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:31.344970390Z" level=debug msg="2024/04/20 02:36:31 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:34 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:34.285387525Z" level=debug msg="2024/04/20 02:36:34 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"/run/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" library=grpc
	Apr 20 02:36:37 docker-flags-302200 dockerd[1015]: time="2024-04-20T02:36:37.098765574Z" level=debug msg="Cleaning up old mountid : start."
	Apr 20 02:36:37 docker-flags-302200 dockerd[1015]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 20 02:36:37 docker-flags-302200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 20 02:36:37 docker-flags-302200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 20 02:36:37 docker-flags-302200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0419 19:36:37.254225   14472 out.go:239] * 
	W0419 19:36:37.255802   14472 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 19:36:37.260303   14472 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p docker-flags-302200 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv" : exit status 90
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-302200 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-302200 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.1083301s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-302200 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-302200 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (9.4269577s)
docker_test.go:76: *** TestDockerFlags FAILED at 2024-04-19 19:36:57.2374043 -0700 PDT m=+9485.731904501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p docker-flags-302200 -n docker-flags-302200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p docker-flags-302200 -n docker-flags-302200: exit status 6 (13.1525391s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0419 19:36:57.371173   10828 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0419 19:37:10.332333   10828 status.go:417] kubeconfig endpoint: get endpoint: "docker-flags-302200" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "docker-flags-302200" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "docker-flags-302200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-302200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-302200: (41.1414167s)
--- FAIL: TestDockerFlags (582.69s)

TestErrorSpam/setup (192.12s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-498400 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-498400 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 --driver=hyperv: (3m12.1114368s)
error_spam_test.go:96: unexpected stderr: "W0419 17:04:27.624671   14544 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-498400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
- KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
- MINIKUBE_LOCATION=18703
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-498400" primary control-plane node in "nospam-498400" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-498400" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0419 17:04:27.624671   14544 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (192.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (33.65s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-614300 -n functional-614300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-614300 -n functional-614300: (12.1136616s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 logs -n 25: (8.5316007s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-498400 --log_dir                                     | nospam-498400     | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:08 PDT | 19 Apr 24 17:08 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-498400 --log_dir                                     | nospam-498400     | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:08 PDT | 19 Apr 24 17:09 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-498400 --log_dir                                     | nospam-498400     | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:09 PDT | 19 Apr 24 17:09 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-498400 --log_dir                                     | nospam-498400     | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:09 PDT | 19 Apr 24 17:09 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-498400 --log_dir                                     | nospam-498400     | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:09 PDT | 19 Apr 24 17:09 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-498400 --log_dir                                     | nospam-498400     | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:09 PDT | 19 Apr 24 17:10 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-498400 --log_dir                                     | nospam-498400     | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:10 PDT | 19 Apr 24 17:10 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-498400                                            | nospam-498400     | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:10 PDT | 19 Apr 24 17:10 PDT |
	| start   | -p functional-614300                                        | functional-614300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:10 PDT | 19 Apr 24 17:13 PDT |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-614300                                        | functional-614300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:13 PDT | 19 Apr 24 17:16 PDT |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-614300 cache add                                 | functional-614300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:16 PDT | 19 Apr 24 17:16 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-614300 cache add                                 | functional-614300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:16 PDT | 19 Apr 24 17:16 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-614300 cache add                                 | functional-614300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:16 PDT | 19 Apr 24 17:16 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-614300 cache add                                 | functional-614300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:16 PDT | 19 Apr 24 17:16 PDT |
	|         | minikube-local-cache-test:functional-614300                 |                   |                   |         |                     |                     |
	| cache   | functional-614300 cache delete                              | functional-614300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:16 PDT | 19 Apr 24 17:16 PDT |
	|         | minikube-local-cache-test:functional-614300                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:16 PDT | 19 Apr 24 17:16 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:16 PDT | 19 Apr 24 17:16 PDT |
	| ssh     | functional-614300 ssh sudo                                  | functional-614300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:16 PDT | 19 Apr 24 17:16 PDT |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-614300                                           | functional-614300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:16 PDT | 19 Apr 24 17:16 PDT |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-614300 ssh                                       | functional-614300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:16 PDT |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-614300 cache reload                              | functional-614300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:17 PDT | 19 Apr 24 17:17 PDT |
	| ssh     | functional-614300 ssh                                       | functional-614300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:17 PDT | 19 Apr 24 17:17 PDT |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:17 PDT | 19 Apr 24 17:17 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:17 PDT | 19 Apr 24 17:17 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-614300 kubectl --                                | functional-614300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:17 PDT | 19 Apr 24 17:17 PDT |
	|         | --context functional-614300                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 17:13:54
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 17:13:54.845642    6268 out.go:291] Setting OutFile to fd 840 ...
	I0419 17:13:54.845642    6268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 17:13:54.845642    6268 out.go:304] Setting ErrFile to fd 856...
	I0419 17:13:54.845642    6268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 17:13:54.867939    6268 out.go:298] Setting JSON to false
	I0419 17:13:54.875678    6268 start.go:129] hostinfo: {"hostname":"minikube1","uptime":10493,"bootTime":1713561541,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0419 17:13:54.875678    6268 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 17:13:54.880244    6268 out.go:177] * [functional-614300] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0419 17:13:54.883473    6268 notify.go:220] Checking for updates...
	I0419 17:13:54.885726    6268 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 17:13:54.888781    6268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 17:13:54.891643    6268 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0419 17:13:54.894896    6268 out.go:177]   - MINIKUBE_LOCATION=18703
	I0419 17:13:54.900061    6268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 17:13:54.910350    6268 config.go:182] Loaded profile config "functional-614300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:13:54.910663    6268 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 17:14:00.075147    6268 out.go:177] * Using the hyperv driver based on existing profile
	I0419 17:14:00.078558    6268 start.go:297] selected driver: hyperv
	I0419 17:14:00.078558    6268 start.go:901] validating driver "hyperv" against &{Name:functional-614300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-614300
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.34.3 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 17:14:00.078788    6268 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 17:14:00.132474    6268 cni.go:84] Creating CNI manager for ""
	I0419 17:14:00.132474    6268 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 17:14:00.132474    6268 start.go:340] cluster config:
	{Name:functional-614300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-614300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] AP
IServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.34.3 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 17:14:00.135786    6268 iso.go:125] acquiring lock: {Name:mk297f2abb67cbbcd36490c866afe693892d0c05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 17:14:00.138239    6268 out.go:177] * Starting "functional-614300" primary control-plane node in "functional-614300" cluster
	I0419 17:14:00.139625    6268 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 17:14:00.139625    6268 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0419 17:14:00.143080    6268 cache.go:56] Caching tarball of preloaded images
	I0419 17:14:00.143179    6268 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0419 17:14:00.143179    6268 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 17:14:00.143756    6268 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\config.json ...
	I0419 17:14:00.143959    6268 start.go:360] acquireMachinesLock for functional-614300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 17:14:00.143959    6268 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-614300"
	I0419 17:14:00.146352    6268 start.go:96] Skipping create...Using existing machine configuration
	I0419 17:14:00.146352    6268 fix.go:54] fixHost starting: 
	I0419 17:14:00.146500    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:14:02.770199    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:14:02.770199    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:02.770373    6268 fix.go:112] recreateIfNeeded on functional-614300: state=Running err=<nil>
	W0419 17:14:02.770373    6268 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 17:14:02.773792    6268 out.go:177] * Updating the running hyperv "functional-614300" VM ...
	I0419 17:14:02.777386    6268 machine.go:94] provisionDockerMachine start ...
	I0419 17:14:02.777386    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:14:04.847366    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:14:04.853309    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:04.856039    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
	I0419 17:14:07.308496    6268 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
	
	I0419 17:14:07.308496    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:07.327069    6268 main.go:141] libmachine: Using SSH client type: native
	I0419 17:14:07.327817    6268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.34.3 22 <nil> <nil>}
	I0419 17:14:07.327817    6268 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 17:14:07.476639    6268 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-614300
	
	I0419 17:14:07.476731    6268 buildroot.go:166] provisioning hostname "functional-614300"
	I0419 17:14:07.476860    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:14:09.502866    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:14:09.502866    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:09.515091    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
	I0419 17:14:11.996002    6268 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
	
	I0419 17:14:12.003838    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:12.010285    6268 main.go:141] libmachine: Using SSH client type: native
	I0419 17:14:12.010868    6268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.34.3 22 <nil> <nil>}
	I0419 17:14:12.010975    6268 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-614300 && echo "functional-614300" | sudo tee /etc/hostname
	I0419 17:14:12.177341    6268 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-614300
	
	I0419 17:14:12.177341    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:14:14.233482    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:14:14.233482    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:14.233482    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
	I0419 17:14:16.716930    6268 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
	
	I0419 17:14:16.723511    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:16.731272    6268 main.go:141] libmachine: Using SSH client type: native
	I0419 17:14:16.732354    6268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.34.3 22 <nil> <nil>}
	I0419 17:14:16.732624    6268 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-614300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-614300/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-614300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 17:14:16.882646    6268 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 17:14:16.882646    6268 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0419 17:14:16.882646    6268 buildroot.go:174] setting up certificates
	I0419 17:14:16.882646    6268 provision.go:84] configureAuth start
	I0419 17:14:16.882646    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:14:18.950809    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:14:18.950809    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:18.962835    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
	I0419 17:14:21.456201    6268 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
	
	I0419 17:14:21.456201    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:21.456201    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:14:23.497165    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:14:23.497165    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:23.497479    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
	I0419 17:14:25.993436    6268 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
	
	I0419 17:14:25.993436    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:26.001447    6268 provision.go:143] copyHostCerts
	I0419 17:14:26.001638    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0419 17:14:26.001687    6268 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0419 17:14:26.001687    6268 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0419 17:14:26.002299    6268 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0419 17:14:26.003168    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0419 17:14:26.003168    6268 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0419 17:14:26.003715    6268 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0419 17:14:26.004090    6268 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0419 17:14:26.005215    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0419 17:14:26.005551    6268 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0419 17:14:26.005551    6268 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0419 17:14:26.005620    6268 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0419 17:14:26.006903    6268 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-614300 san=[127.0.0.1 172.19.34.3 functional-614300 localhost minikube]
	I0419 17:14:26.230581    6268 provision.go:177] copyRemoteCerts
	I0419 17:14:26.247815    6268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 17:14:26.247815    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:14:28.293617    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:14:28.293617    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:28.293875    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
	I0419 17:14:30.752759    6268 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
	
	I0419 17:14:30.765158    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:30.765386    6268 sshutil.go:53] new ssh client: &{IP:172.19.34.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-614300\id_rsa Username:docker}
	I0419 17:14:30.879949    6268 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6321229s)
	I0419 17:14:30.880094    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0419 17:14:30.880619    6268 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 17:14:30.928922    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0419 17:14:30.929200    6268 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0419 17:14:30.977913    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0419 17:14:30.978238    6268 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0419 17:14:31.028688    6268 provision.go:87] duration metric: took 14.1460076s to configureAuth
	I0419 17:14:31.028688    6268 buildroot.go:189] setting minikube options for container-runtime
	I0419 17:14:31.029414    6268 config.go:182] Loaded profile config "functional-614300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:14:31.029414    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:14:33.045329    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:14:33.050389    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:33.050555    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
	I0419 17:14:35.471581    6268 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
	
	I0419 17:14:35.471581    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:35.490504    6268 main.go:141] libmachine: Using SSH client type: native
	I0419 17:14:35.491282    6268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.34.3 22 <nil> <nil>}
	I0419 17:14:35.491282    6268 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0419 17:14:35.632974    6268 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0419 17:14:35.632974    6268 buildroot.go:70] root file system type: tmpfs
	I0419 17:14:35.632974    6268 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0419 17:14:35.632974    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:14:37.665392    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:14:37.678378    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:37.678378    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
	I0419 17:14:40.142069    6268 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
	
	I0419 17:14:40.154143    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:40.161575    6268 main.go:141] libmachine: Using SSH client type: native
	I0419 17:14:40.162085    6268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.34.3 22 <nil> <nil>}
	I0419 17:14:40.162161    6268 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0419 17:14:40.340249    6268 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0419 17:14:40.340249    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:14:42.330592    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:14:42.341985    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:42.341985    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
	I0419 17:14:44.761659    6268 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
	
	I0419 17:14:44.761659    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:44.780601    6268 main.go:141] libmachine: Using SSH client type: native
	I0419 17:14:44.780601    6268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.34.3 22 <nil> <nil>}
	I0419 17:14:44.780601    6268 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0419 17:14:44.933920    6268 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 17:14:44.933920    6268 machine.go:97] duration metric: took 42.1564282s to provisionDockerMachine
	I0419 17:14:44.933920    6268 start.go:293] postStartSetup for "functional-614300" (driver="hyperv")
	I0419 17:14:44.933920    6268 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 17:14:44.948136    6268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 17:14:44.948136    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:14:46.934601    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:14:46.947049    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:46.947049    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
	I0419 17:14:49.356375    6268 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
	
	I0419 17:14:49.356375    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:49.368517    6268 sshutil.go:53] new ssh client: &{IP:172.19.34.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-614300\id_rsa Username:docker}
	I0419 17:14:49.476291    6268 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5280511s)
	I0419 17:14:49.490467    6268 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 17:14:49.499236    6268 command_runner.go:130] > NAME=Buildroot
	I0419 17:14:49.499357    6268 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0419 17:14:49.499357    6268 command_runner.go:130] > ID=buildroot
	I0419 17:14:49.499357    6268 command_runner.go:130] > VERSION_ID=2023.02.9
	I0419 17:14:49.499459    6268 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0419 17:14:49.499552    6268 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 17:14:49.499646    6268 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0419 17:14:49.500322    6268 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0419 17:14:49.501824    6268 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> 34162.pem in /etc/ssl/certs
	I0419 17:14:49.501824    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /etc/ssl/certs/34162.pem
	I0419 17:14:49.503201    6268 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\3416\hosts -> hosts in /etc/test/nested/copy/3416
	I0419 17:14:49.503201    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\3416\hosts -> /etc/test/nested/copy/3416/hosts
	I0419 17:14:49.513426    6268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/3416
	I0419 17:14:49.535103    6268 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /etc/ssl/certs/34162.pem (1708 bytes)
	I0419 17:14:49.589367    6268 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\3416\hosts --> /etc/test/nested/copy/3416/hosts (40 bytes)
	I0419 17:14:49.636083    6268 start.go:296] duration metric: took 4.7021519s for postStartSetup
	I0419 17:14:49.636083    6268 fix.go:56] duration metric: took 49.4896077s for fixHost
	I0419 17:14:49.636083    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:14:51.672089    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:14:51.672089    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:51.684034    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
	I0419 17:14:54.110083    6268 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
	
	I0419 17:14:54.116116    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:54.120687    6268 main.go:141] libmachine: Using SSH client type: native
	I0419 17:14:54.121532    6268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.34.3 22 <nil> <nil>}
	I0419 17:14:54.121532    6268 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 17:14:54.262125    6268 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713572094.267576638
	
	I0419 17:14:54.262125    6268 fix.go:216] guest clock: 1713572094.267576638
	I0419 17:14:54.262659    6268 fix.go:229] Guest: 2024-04-19 17:14:54.267576638 -0700 PDT Remote: 2024-04-19 17:14:49.6360836 -0700 PDT m=+54.898239801 (delta=4.631493038s)
	I0419 17:14:54.262795    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:14:56.257692    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:14:56.257692    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:56.262569    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
	I0419 17:14:58.692126    6268 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
	
	I0419 17:14:58.692126    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:14:58.711236    6268 main.go:141] libmachine: Using SSH client type: native
	I0419 17:14:58.711236    6268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.34.3 22 <nil> <nil>}
	I0419 17:14:58.711236    6268 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713572094
	I0419 17:14:58.876881    6268 main.go:141] libmachine: SSH cmd err, output: <nil>: Sat Apr 20 00:14:54 UTC 2024
	
	I0419 17:14:58.876942    6268 fix.go:236] clock set: Sat Apr 20 00:14:54 UTC 2024
	 (err=<nil>)
	I0419 17:14:58.877012    6268 start.go:83] releasing machines lock for "functional-614300", held for 58.7306799s
	I0419 17:14:58.877255    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:15:00.886153    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:15:00.886153    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:15:00.898479    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
	I0419 17:15:03.434424    6268 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
	
	I0419 17:15:03.434424    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:15:03.439072    6268 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 17:15:03.439284    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:15:03.456994    6268 ssh_runner.go:195] Run: cat /version.json
	I0419 17:15:03.457109    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:15:05.530228    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:15:05.530228    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:15:05.530228    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:15:05.530228    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:15:05.530228    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
	I0419 17:15:05.530228    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
	I0419 17:15:08.155164    6268 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
	
	I0419 17:15:08.155277    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:15:08.155565    6268 sshutil.go:53] new ssh client: &{IP:172.19.34.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-614300\id_rsa Username:docker}
	I0419 17:15:08.183168    6268 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
	
	I0419 17:15:08.183168    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:15:08.190409    6268 sshutil.go:53] new ssh client: &{IP:172.19.34.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-614300\id_rsa Username:docker}
	I0419 17:15:08.258808    6268 command_runner.go:130] > {"iso_version": "v1.33.0", "kicbase_version": "v0.0.43-1713236840-18649", "minikube_version": "v1.33.0", "commit": "4bd203f0c710e7fdd30539846cf2bc6624a2556d"}
	I0419 17:15:08.258935    6268 ssh_runner.go:235] Completed: cat /version.json: (4.8019288s)
	I0419 17:15:08.271567    6268 ssh_runner.go:195] Run: systemctl --version
	I0419 17:15:08.333066    6268 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0419 17:15:08.334062    6268 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8948995s)
	I0419 17:15:08.334062    6268 command_runner.go:130] > systemd 252 (252)
	I0419 17:15:08.334190    6268 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0419 17:15:08.348519    6268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0419 17:15:08.356823    6268 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0419 17:15:08.358133    6268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 17:15:08.373353    6268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 17:15:08.392653    6268 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0419 17:15:08.392653    6268 start.go:494] detecting cgroup driver to use...
	I0419 17:15:08.393178    6268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 17:15:08.428909    6268 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0419 17:15:08.443571    6268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0419 17:15:08.478339    6268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0419 17:15:08.504768    6268 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0419 17:15:08.519565    6268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0419 17:15:08.550196    6268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 17:15:08.590224    6268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0419 17:15:08.627185    6268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 17:15:08.659952    6268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 17:15:08.697142    6268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0419 17:15:08.739383    6268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0419 17:15:08.775509    6268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0419 17:15:08.814844    6268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 17:15:08.835961    6268 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0419 17:15:08.851248    6268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 17:15:08.884651    6268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:15:09.172859    6268 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0419 17:15:09.207905    6268 start.go:494] detecting cgroup driver to use...
	I0419 17:15:09.220805    6268 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0419 17:15:09.250327    6268 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0419 17:15:09.250439    6268 command_runner.go:130] > [Unit]
	I0419 17:15:09.250638    6268 command_runner.go:130] > Description=Docker Application Container Engine
	I0419 17:15:09.250668    6268 command_runner.go:130] > Documentation=https://docs.docker.com
	I0419 17:15:09.250668    6268 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0419 17:15:09.250697    6268 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0419 17:15:09.250697    6268 command_runner.go:130] > StartLimitBurst=3
	I0419 17:15:09.250697    6268 command_runner.go:130] > StartLimitIntervalSec=60
	I0419 17:15:09.250697    6268 command_runner.go:130] > [Service]
	I0419 17:15:09.250697    6268 command_runner.go:130] > Type=notify
	I0419 17:15:09.250697    6268 command_runner.go:130] > Restart=on-failure
	I0419 17:15:09.250697    6268 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0419 17:15:09.250697    6268 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0419 17:15:09.250697    6268 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0419 17:15:09.250697    6268 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0419 17:15:09.250697    6268 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0419 17:15:09.250697    6268 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0419 17:15:09.250697    6268 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0419 17:15:09.250697    6268 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0419 17:15:09.250697    6268 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0419 17:15:09.250697    6268 command_runner.go:130] > ExecStart=
	I0419 17:15:09.250697    6268 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0419 17:15:09.250697    6268 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0419 17:15:09.250697    6268 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0419 17:15:09.250697    6268 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0419 17:15:09.250697    6268 command_runner.go:130] > LimitNOFILE=infinity
	I0419 17:15:09.250697    6268 command_runner.go:130] > LimitNPROC=infinity
	I0419 17:15:09.250697    6268 command_runner.go:130] > LimitCORE=infinity
	I0419 17:15:09.250697    6268 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0419 17:15:09.250697    6268 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0419 17:15:09.250697    6268 command_runner.go:130] > TasksMax=infinity
	I0419 17:15:09.250697    6268 command_runner.go:130] > TimeoutStartSec=0
	I0419 17:15:09.250697    6268 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0419 17:15:09.250697    6268 command_runner.go:130] > Delegate=yes
	I0419 17:15:09.250697    6268 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0419 17:15:09.250697    6268 command_runner.go:130] > KillMode=process
	I0419 17:15:09.250697    6268 command_runner.go:130] > [Install]
	I0419 17:15:09.250697    6268 command_runner.go:130] > WantedBy=multi-user.target
	I0419 17:15:09.264338    6268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 17:15:09.298142    6268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 17:15:09.356227    6268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 17:15:09.401032    6268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 17:15:09.426681    6268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 17:15:09.468489    6268 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0419 17:15:09.482103    6268 ssh_runner.go:195] Run: which cri-dockerd
	I0419 17:15:09.490967    6268 command_runner.go:130] > /usr/bin/cri-dockerd
	I0419 17:15:09.504625    6268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0419 17:15:09.526244    6268 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0419 17:15:09.575240    6268 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0419 17:15:09.861494    6268 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0419 17:15:10.116779    6268 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0419 17:15:10.116854    6268 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0419 17:15:10.167371    6268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:15:10.423288    6268 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 17:15:23.411822    6268 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.9885016s)
	I0419 17:15:23.427033    6268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0419 17:15:23.476541    6268 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0419 17:15:23.542833    6268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 17:15:23.586360    6268 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0419 17:15:23.832030    6268 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0419 17:15:24.052755    6268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:15:24.281829    6268 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0419 17:15:24.342505    6268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 17:15:24.382706    6268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:15:24.614337    6268 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0419 17:15:24.746266    6268 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0419 17:15:24.767509    6268 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0419 17:15:24.777173    6268 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0419 17:15:24.777214    6268 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0419 17:15:24.777214    6268 command_runner.go:130] > Device: 0,22	Inode: 1505        Links: 1
	I0419 17:15:24.777214    6268 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0419 17:15:24.777214    6268 command_runner.go:130] > Access: 2024-04-20 00:15:24.647304757 +0000
	I0419 17:15:24.777214    6268 command_runner.go:130] > Modify: 2024-04-20 00:15:24.647304757 +0000
	I0419 17:15:24.777214    6268 command_runner.go:130] > Change: 2024-04-20 00:15:24.653304951 +0000
	I0419 17:15:24.777214    6268 command_runner.go:130] >  Birth: -
	I0419 17:15:24.777319    6268 start.go:562] Will wait 60s for crictl version
	I0419 17:15:24.791794    6268 ssh_runner.go:195] Run: which crictl
	I0419 17:15:24.800056    6268 command_runner.go:130] > /usr/bin/crictl
	I0419 17:15:24.814326    6268 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 17:15:24.878732    6268 command_runner.go:130] > Version:  0.1.0
	I0419 17:15:24.878778    6268 command_runner.go:130] > RuntimeName:  docker
	I0419 17:15:24.878778    6268 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0419 17:15:24.878778    6268 command_runner.go:130] > RuntimeApiVersion:  v1
	I0419 17:15:24.878778    6268 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0419 17:15:24.890931    6268 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 17:15:24.922470    6268 command_runner.go:130] > 26.0.1
	I0419 17:15:24.936419    6268 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 17:15:24.962820    6268 command_runner.go:130] > 26.0.1
	I0419 17:15:24.969429    6268 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0419 17:15:24.969616    6268 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0419 17:15:24.975524    6268 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0419 17:15:24.975524    6268 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0419 17:15:24.975747    6268 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0419 17:15:24.975747    6268 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8c:b9:25 Flags:up|broadcast|multicast|running}
	I0419 17:15:24.978218    6268 ip.go:210] interface addr: fe80::ce04:318e:a1d8:4460/64
	I0419 17:15:24.979300    6268 ip.go:210] interface addr: 172.19.32.1/20
	I0419 17:15:24.990204    6268 ssh_runner.go:195] Run: grep 172.19.32.1	host.minikube.internal$ /etc/hosts
	I0419 17:15:24.992562    6268 command_runner.go:130] > 172.19.32.1	host.minikube.internal
	I0419 17:15:24.999261    6268 kubeadm.go:877] updating cluster {Name:functional-614300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-614300 Namespace:defaul
t APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.34.3 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 17:15:24.999261    6268 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 17:15:25.000451    6268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0419 17:15:25.031053    6268 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0419 17:15:25.031131    6268 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 17:15:25.031131    6268 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0419 17:15:25.031131    6268 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0419 17:15:25.031163    6268 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0419 17:15:25.031163    6268 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0419 17:15:25.031163    6268 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0419 17:15:25.031202    6268 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 17:15:25.031381    6268 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0419 17:15:25.031426    6268 docker.go:615] Images already preloaded, skipping extraction
	I0419 17:15:25.043318    6268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0419 17:15:25.070062    6268 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0419 17:15:25.070062    6268 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0419 17:15:25.070160    6268 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 17:15:25.070160    6268 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0419 17:15:25.070160    6268 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0419 17:15:25.070160    6268 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0419 17:15:25.070160    6268 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0419 17:15:25.070160    6268 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 17:15:25.070229    6268 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0419 17:15:25.070229    6268 cache_images.go:84] Images are preloaded, skipping loading
	I0419 17:15:25.070229    6268 kubeadm.go:928] updating node { 172.19.34.3 8441 v1.30.0 docker true true} ...
	I0419 17:15:25.070229    6268 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-614300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.34.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:functional-614300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 17:15:25.080991    6268 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0419 17:15:25.119015    6268 command_runner.go:130] > cgroupfs
	I0419 17:15:25.120055    6268 cni.go:84] Creating CNI manager for ""
	I0419 17:15:25.120147    6268 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 17:15:25.120223    6268 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 17:15:25.120223    6268 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.34.3 APIServerPort:8441 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-614300 NodeName:functional-614300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.34.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.34.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 17:15:25.120566    6268 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.34.3
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-614300"
	  kubeletExtraArgs:
	    node-ip: 172.19.34.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.34.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0419 17:15:25.132347    6268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 17:15:25.152725    6268 command_runner.go:130] > kubeadm
	I0419 17:15:25.152725    6268 command_runner.go:130] > kubectl
	I0419 17:15:25.152725    6268 command_runner.go:130] > kubelet
	I0419 17:15:25.152725    6268 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 17:15:25.167600    6268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0419 17:15:25.186800    6268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0419 17:15:25.227676    6268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 17:15:25.252654    6268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0419 17:15:25.307485    6268 ssh_runner.go:195] Run: grep 172.19.34.3	control-plane.minikube.internal$ /etc/hosts
	I0419 17:15:25.310276    6268 command_runner.go:130] > 172.19.34.3	control-plane.minikube.internal
	I0419 17:15:25.330359    6268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:15:25.573027    6268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 17:15:25.608341    6268 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300 for IP: 172.19.34.3
	I0419 17:15:25.608341    6268 certs.go:194] generating shared ca certs ...
	I0419 17:15:25.608341    6268 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:15:25.609488    6268 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0419 17:15:25.609948    6268 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0419 17:15:25.609948    6268 certs.go:256] generating profile certs ...
	I0419 17:15:25.610859    6268 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.key
	I0419 17:15:25.611267    6268 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\apiserver.key.9e56659e
	I0419 17:15:25.612052    6268 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\proxy-client.key
	I0419 17:15:25.612131    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 17:15:25.612375    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0419 17:15:25.612450    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 17:15:25.612450    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 17:15:25.612450    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 17:15:25.613285    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 17:15:25.613681    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 17:15:25.613743    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 17:15:25.614474    6268 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem (1338 bytes)
	W0419 17:15:25.614640    6268 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416_empty.pem, impossibly tiny 0 bytes
	I0419 17:15:25.614640    6268 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0419 17:15:25.615226    6268 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0419 17:15:25.615373    6268 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0419 17:15:25.615373    6268 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0419 17:15:25.616007    6268 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem (1708 bytes)
	I0419 17:15:25.616951    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem -> /usr/share/ca-certificates/3416.pem
	I0419 17:15:25.617029    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /usr/share/ca-certificates/34162.pem
	I0419 17:15:25.617029    6268 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:15:25.618559    6268 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 17:15:25.693973    6268 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 17:15:25.780059    6268 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 17:15:25.853823    6268 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 17:15:25.933441    6268 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0419 17:15:25.992238    6268 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0419 17:15:26.065031    6268 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 17:15:26.131402    6268 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0419 17:15:26.173746    6268 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem --> /usr/share/ca-certificates/3416.pem (1338 bytes)
	I0419 17:15:26.235669    6268 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /usr/share/ca-certificates/34162.pem (1708 bytes)
	I0419 17:15:26.316983    6268 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 17:15:26.371878    6268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0419 17:15:26.430799    6268 ssh_runner.go:195] Run: openssl version
	I0419 17:15:26.446921    6268 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0419 17:15:26.463825    6268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3416.pem && ln -fs /usr/share/ca-certificates/3416.pem /etc/ssl/certs/3416.pem"
	I0419 17:15:26.498330    6268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3416.pem
	I0419 17:15:26.503932    6268 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 17:15:26.503932    6268 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 17:15:26.517331    6268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3416.pem
	I0419 17:15:26.534098    6268 command_runner.go:130] > 51391683
	I0419 17:15:26.549085    6268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3416.pem /etc/ssl/certs/51391683.0"
	I0419 17:15:26.586701    6268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34162.pem && ln -fs /usr/share/ca-certificates/34162.pem /etc/ssl/certs/34162.pem"
	I0419 17:15:26.624259    6268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34162.pem
	I0419 17:15:26.633927    6268 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 17:15:26.633927    6268 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 17:15:26.648321    6268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34162.pem
	I0419 17:15:26.662049    6268 command_runner.go:130] > 3ec20f2e
	I0419 17:15:26.676480    6268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34162.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 17:15:26.715886    6268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 17:15:26.756314    6268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:15:26.787596    6268 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:15:26.787596    6268 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:15:26.801455    6268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:15:26.812800    6268 command_runner.go:130] > b5213941
	I0419 17:15:26.827155    6268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 17:15:26.865757    6268 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 17:15:26.879757    6268 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 17:15:26.879757    6268 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0419 17:15:26.879757    6268 command_runner.go:130] > Device: 8,1	Inode: 6290258     Links: 1
	I0419 17:15:26.879757    6268 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0419 17:15:26.879757    6268 command_runner.go:130] > Access: 2024-04-20 00:13:13.893599175 +0000
	I0419 17:15:26.879757    6268 command_runner.go:130] > Modify: 2024-04-20 00:13:13.893599175 +0000
	I0419 17:15:26.879757    6268 command_runner.go:130] > Change: 2024-04-20 00:13:13.893599175 +0000
	I0419 17:15:26.879757    6268 command_runner.go:130] >  Birth: 2024-04-20 00:13:13.893599175 +0000
	I0419 17:15:26.900287    6268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0419 17:15:26.912004    6268 command_runner.go:130] > Certificate will not expire
	I0419 17:15:26.930605    6268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0419 17:15:26.942407    6268 command_runner.go:130] > Certificate will not expire
	I0419 17:15:26.954973    6268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0419 17:15:26.973861    6268 command_runner.go:130] > Certificate will not expire
	I0419 17:15:26.986400    6268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0419 17:15:26.999283    6268 command_runner.go:130] > Certificate will not expire
	I0419 17:15:27.016297    6268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0419 17:15:27.026255    6268 command_runner.go:130] > Certificate will not expire
	I0419 17:15:27.044603    6268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0419 17:15:27.057276    6268 command_runner.go:130] > Certificate will not expire
	I0419 17:15:27.061382    6268 kubeadm.go:391] StartCluster: {Name:functional-614300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-614300 Namespace:default A
PIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.34.3 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 17:15:27.073167    6268 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0419 17:15:27.146551    6268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0419 17:15:27.169936    6268 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0419 17:15:27.169936    6268 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0419 17:15:27.169936    6268 command_runner.go:130] > /var/lib/minikube/etcd:
	I0419 17:15:27.169936    6268 command_runner.go:130] > member
	W0419 17:15:27.169936    6268 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0419 17:15:27.169936    6268 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0419 17:15:27.169936    6268 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0419 17:15:27.183805    6268 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0419 17:15:27.198761    6268 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0419 17:15:27.205225    6268 kubeconfig.go:125] found "functional-614300" server: "https://172.19.34.3:8441"
	I0419 17:15:27.206137    6268 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 17:15:27.206942    6268 kapi.go:59] client config for functional-614300: &rest.Config{Host:"https://172.19.34.3:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-614300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-614300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c35620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 17:15:27.208438    6268 cert_rotation.go:137] Starting client certificate rotation controller
	I0419 17:15:27.223405    6268 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0419 17:15:27.244160    6268 kubeadm.go:624] The running cluster does not require reconfiguration: 172.19.34.3
	I0419 17:15:27.244160    6268 kubeadm.go:1154] stopping kube-system containers ...
	I0419 17:15:27.255992    6268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0419 17:15:27.306926    6268 command_runner.go:130] > 27cc42670952
	I0419 17:15:27.307479    6268 command_runner.go:130] > 95cdd3967491
	I0419 17:15:27.307479    6268 command_runner.go:130] > 360bf6a69c98
	I0419 17:15:27.307479    6268 command_runner.go:130] > 9eee3dd7d42a
	I0419 17:15:27.307479    6268 command_runner.go:130] > a9371c28aa09
	I0419 17:15:27.307479    6268 command_runner.go:130] > 4bca19607c83
	I0419 17:15:27.307479    6268 command_runner.go:130] > ec335cd22217
	I0419 17:15:27.307479    6268 command_runner.go:130] > 8b0dca2a4dce
	I0419 17:15:27.307479    6268 command_runner.go:130] > fe1cb30f2314
	I0419 17:15:27.307479    6268 command_runner.go:130] > 4d1665880464
	I0419 17:15:27.307479    6268 command_runner.go:130] > e2240658f350
	I0419 17:15:27.307479    6268 command_runner.go:130] > 368af9bc4234
	I0419 17:15:27.307479    6268 command_runner.go:130] > 9b26cfd60c2a
	I0419 17:15:27.307479    6268 command_runner.go:130] > 94c6ef3b6667
	I0419 17:15:27.307479    6268 command_runner.go:130] > e16317823a43
	I0419 17:15:27.307479    6268 command_runner.go:130] > cc9a8bfb1f0c
	I0419 17:15:27.307479    6268 command_runner.go:130] > 3f4c689e9898
	I0419 17:15:27.307479    6268 command_runner.go:130] > 2ba56ad38325
	I0419 17:15:27.307479    6268 command_runner.go:130] > 82859e5a1f65
	I0419 17:15:27.307479    6268 command_runner.go:130] > 9f4442a727b5
	I0419 17:15:27.307479    6268 command_runner.go:130] > 75c291047d2c
	I0419 17:15:27.307479    6268 command_runner.go:130] > adc08684ed21
	I0419 17:15:27.307479    6268 command_runner.go:130] > 178fd2b70d8c
	I0419 17:15:27.307479    6268 command_runner.go:130] > f5852ef00493
	I0419 17:15:27.307479    6268 command_runner.go:130] > 4ff7647a38c7
	I0419 17:15:27.308476    6268 docker.go:483] Stopping containers: [27cc42670952 95cdd3967491 360bf6a69c98 9eee3dd7d42a a9371c28aa09 4bca19607c83 ec335cd22217 8b0dca2a4dce fe1cb30f2314 4d1665880464 e2240658f350 368af9bc4234 9b26cfd60c2a 94c6ef3b6667 e16317823a43 cc9a8bfb1f0c 3f4c689e9898 2ba56ad38325 82859e5a1f65 9f4442a727b5 75c291047d2c adc08684ed21 178fd2b70d8c f5852ef00493 4ff7647a38c7]
	I0419 17:15:27.320521    6268 ssh_runner.go:195] Run: docker stop 27cc42670952 95cdd3967491 360bf6a69c98 9eee3dd7d42a a9371c28aa09 4bca19607c83 ec335cd22217 8b0dca2a4dce fe1cb30f2314 4d1665880464 e2240658f350 368af9bc4234 9b26cfd60c2a 94c6ef3b6667 e16317823a43 cc9a8bfb1f0c 3f4c689e9898 2ba56ad38325 82859e5a1f65 9f4442a727b5 75c291047d2c adc08684ed21 178fd2b70d8c f5852ef00493 4ff7647a38c7
	I0419 17:15:28.209945    6268 command_runner.go:130] > 27cc42670952
	I0419 17:15:28.211021    6268 command_runner.go:130] > 95cdd3967491
	I0419 17:15:28.211021    6268 command_runner.go:130] > 360bf6a69c98
	I0419 17:15:28.211021    6268 command_runner.go:130] > 9eee3dd7d42a
	I0419 17:15:28.211021    6268 command_runner.go:130] > a9371c28aa09
	I0419 17:15:28.211021    6268 command_runner.go:130] > 4bca19607c83
	I0419 17:15:28.211021    6268 command_runner.go:130] > ec335cd22217
	I0419 17:15:28.211021    6268 command_runner.go:130] > 8b0dca2a4dce
	I0419 17:15:28.211021    6268 command_runner.go:130] > fe1cb30f2314
	I0419 17:15:28.211021    6268 command_runner.go:130] > 4d1665880464
	I0419 17:15:28.211021    6268 command_runner.go:130] > e2240658f350
	I0419 17:15:28.211021    6268 command_runner.go:130] > 368af9bc4234
	I0419 17:15:28.211234    6268 command_runner.go:130] > 9b26cfd60c2a
	I0419 17:15:28.211361    6268 command_runner.go:130] > 94c6ef3b6667
	I0419 17:15:28.211361    6268 command_runner.go:130] > e16317823a43
	I0419 17:15:28.211361    6268 command_runner.go:130] > cc9a8bfb1f0c
	I0419 17:15:28.211361    6268 command_runner.go:130] > 3f4c689e9898
	I0419 17:15:28.211361    6268 command_runner.go:130] > 2ba56ad38325
	I0419 17:15:28.211361    6268 command_runner.go:130] > 82859e5a1f65
	I0419 17:15:28.211432    6268 command_runner.go:130] > 9f4442a727b5
	I0419 17:15:28.211451    6268 command_runner.go:130] > 75c291047d2c
	I0419 17:15:28.211451    6268 command_runner.go:130] > adc08684ed21
	I0419 17:15:28.211451    6268 command_runner.go:130] > 178fd2b70d8c
	I0419 17:15:28.211451    6268 command_runner.go:130] > f5852ef00493
	I0419 17:15:28.211451    6268 command_runner.go:130] > 4ff7647a38c7
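	In the log above, minikube first lists the kube-system container IDs with `docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}`, then stops the whole batch in a single `docker stop` invocation. A minimal sketch of that batching step (the helper name `build_stop_command` is ours for illustration, not minikube's):

	```python
	def build_stop_command(ps_output: str) -> list[str]:
	    """Turn the ID-per-line output of `docker ps ... --format={{.ID}}`
	    into a single `docker stop` argument vector, skipping blank lines."""
	    ids = [line.strip() for line in ps_output.splitlines() if line.strip()]
	    return ["docker", "stop", *ids]


	if __name__ == "__main__":
	    sample = "27cc42670952\n95cdd3967491\n360bf6a69c98\n"
	    print(build_stop_command(sample))
	```

	Stopping all containers in one invocation, as the log shows, lets Docker parallelize the shutdowns instead of paying the stop timeout per container.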
	I0419 17:15:28.226329    6268 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0419 17:15:28.312096    6268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 17:15:28.332367    6268 command_runner.go:130] > -rw------- 1 root root 5647 Apr 20 00:13 /etc/kubernetes/admin.conf
	I0419 17:15:28.332675    6268 command_runner.go:130] > -rw------- 1 root root 5655 Apr 20 00:13 /etc/kubernetes/controller-manager.conf
	I0419 17:15:28.332675    6268 command_runner.go:130] > -rw------- 1 root root 2007 Apr 20 00:13 /etc/kubernetes/kubelet.conf
	I0419 17:15:28.332675    6268 command_runner.go:130] > -rw------- 1 root root 5603 Apr 20 00:13 /etc/kubernetes/scheduler.conf
	I0419 17:15:28.332675    6268 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 Apr 20 00:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5655 Apr 20 00:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Apr 20 00:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5603 Apr 20 00:13 /etc/kubernetes/scheduler.conf
	
	I0419 17:15:28.345695    6268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0419 17:15:28.353527    6268 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0419 17:15:28.377291    6268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0419 17:15:28.399281    6268 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0419 17:15:28.412437    6268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0419 17:15:28.423767    6268 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0419 17:15:28.445975    6268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 17:15:28.479335    6268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0419 17:15:28.500193    6268 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0419 17:15:28.514301    6268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
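	Before rerunning the kubeadm phases, minikube greps each kubeconfig under /etc/kubernetes for the expected `https://control-plane.minikube.internal:8441` server line and deletes any file that lacks it (here controller-manager.conf and scheduler.conf), so the subsequent `kubeadm init phase kubeconfig all` regenerates them. A hedged sketch of that decision logic (function and argument names are illustrative assumptions):

	```python
	def stale_configs(configs: dict[str, str], endpoint: str) -> list[str]:
	    """Return the paths whose content does not mention the expected
	    control-plane endpoint; these are the files minikube removes so
	    kubeadm can rewrite them (cf. kubeadm.go:162 above)."""
	    return [path for path, text in configs.items() if endpoint not in text]


	if __name__ == "__main__":
	    endpoint = "https://control-plane.minikube.internal:8441"
	    configs = {
	        "/etc/kubernetes/admin.conf": f"    server: {endpoint}\n",
	        "/etc/kubernetes/controller-manager.conf": "    server: https://172.19.34.3:8441\n",
	    }
	    print(stale_configs(configs, endpoint))
	```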
	I0419 17:15:28.546152    6268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 17:15:28.563691    6268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 17:15:28.630257    6268 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0419 17:15:28.636366    6268 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0419 17:15:28.637241    6268 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0419 17:15:28.637587    6268 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0419 17:15:28.637587    6268 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0419 17:15:28.637587    6268 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0419 17:15:28.637587    6268 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0419 17:15:28.637587    6268 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0419 17:15:28.637587    6268 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0419 17:15:28.637587    6268 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0419 17:15:28.637587    6268 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0419 17:15:28.648417    6268 command_runner.go:130] > [certs] Using the existing "sa" key
	I0419 17:15:28.649087    6268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 17:15:28.722492    6268 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0419 17:15:28.812660    6268 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0419 17:15:29.258545    6268 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0419 17:15:29.345803    6268 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0419 17:15:29.449946    6268 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0419 17:15:29.561326    6268 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0419 17:15:29.574603    6268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0419 17:15:29.894667    6268 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 17:15:29.894735    6268 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 17:15:29.894735    6268 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0419 17:15:29.894803    6268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 17:15:29.968928    6268 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0419 17:15:29.968928    6268 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0419 17:15:29.978904    6268 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0419 17:15:29.979861    6268 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0419 17:15:29.995041    6268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0419 17:15:30.158885    6268 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0419 17:15:30.159033    6268 api_server.go:52] waiting for apiserver process to appear ...
	I0419 17:15:30.174282    6268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 17:15:30.684400    6268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 17:15:31.189543    6268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 17:15:31.683356    6268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 17:15:32.183633    6268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 17:15:32.206916    6268 command_runner.go:130] > 5772
	I0419 17:15:32.208981    6268 api_server.go:72] duration metric: took 2.0500906s to wait for apiserver process to appear ...
	I0419 17:15:32.209026    6268 api_server.go:88] waiting for apiserver healthz status ...
	I0419 17:15:32.209100    6268 api_server.go:253] Checking apiserver healthz at https://172.19.34.3:8441/healthz ...
	I0419 17:15:35.330596    6268 api_server.go:279] https://172.19.34.3:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0419 17:15:35.330596    6268 api_server.go:103] status: https://172.19.34.3:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0419 17:15:35.334527    6268 api_server.go:253] Checking apiserver healthz at https://172.19.34.3:8441/healthz ...
	I0419 17:15:35.365183    6268 api_server.go:279] https://172.19.34.3:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0419 17:15:35.368953    6268 api_server.go:103] status: https://172.19.34.3:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0419 17:15:35.719253    6268 api_server.go:253] Checking apiserver healthz at https://172.19.34.3:8441/healthz ...
	I0419 17:15:35.725967    6268 api_server.go:279] https://172.19.34.3:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0419 17:15:35.728177    6268 api_server.go:103] status: https://172.19.34.3:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0419 17:15:36.213379    6268 api_server.go:253] Checking apiserver healthz at https://172.19.34.3:8441/healthz ...
	I0419 17:15:36.221007    6268 api_server.go:279] https://172.19.34.3:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0419 17:15:36.221007    6268 api_server.go:103] status: https://172.19.34.3:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0419 17:15:36.722998    6268 api_server.go:253] Checking apiserver healthz at https://172.19.34.3:8441/healthz ...
	I0419 17:15:36.735933    6268 api_server.go:279] https://172.19.34.3:8441/healthz returned 200:
	ok
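	The verbose `/healthz` bodies in the 500 responses above are one check per line, `[+]` for a passing check and `[-]` for a failing one (here `poststarthook/rbac/bootstrap-roles` and `poststarthook/scheduling/bootstrap-system-priority-classes` until RBAC bootstrapping finishes); minikube keeps polling until the endpoint returns a plain 200 `ok`. A small parser for that line format (our own helper, not minikube code):

	```python
	def parse_healthz(body: str) -> tuple[list[str], list[str]]:
	    """Split a verbose /healthz body into (passing, failing) check names."""
	    passing, failing = [], []
	    for line in body.splitlines():
	        line = line.strip()
	        if line.startswith("[+]"):
	            passing.append(line[3:].split(" ")[0])
	        elif line.startswith("[-]"):
	            failing.append(line[3:].split(" ")[0])
	    return passing, failing


	if __name__ == "__main__":
	    body = (
	        "[+]ping ok\n"
	        "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n"
	        "healthz check failed\n"
	    )
	    print(parse_healthz(body))
	```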
	I0419 17:15:36.738678    6268 round_trippers.go:463] GET https://172.19.34.3:8441/version
	I0419 17:15:36.738714    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:36.738746    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:36.738746    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:36.752149    6268 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0419 17:15:36.752215    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:36.752215    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:36.752215    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:36.752252    6268 round_trippers.go:580]     Content-Length: 263
	I0419 17:15:36.752252    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:36 GMT
	I0419 17:15:36.752252    6268 round_trippers.go:580]     Audit-Id: 50a65ab1-5320-458c-a8c3-f2ec8575813d
	I0419 17:15:36.752252    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:36.752252    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:36.752252    6268 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0419 17:15:36.752252    6268 api_server.go:141] control plane version: v1.30.0
	I0419 17:15:36.752252    6268 api_server.go:131] duration metric: took 4.5431835s to wait for apiserver health ...
	I0419 17:15:36.752252    6268 cni.go:84] Creating CNI manager for ""
	I0419 17:15:36.752252    6268 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 17:15:36.756047    6268 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0419 17:15:36.769372    6268 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0419 17:15:36.796836    6268 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0419 17:15:36.832352    6268 system_pods.go:43] waiting for kube-system pods to appear ...
	I0419 17:15:36.832352    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods
	I0419 17:15:36.832352    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:36.832352    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:36.832352    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:36.833133    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:36.840922    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:36.840957    6268 round_trippers.go:580]     Audit-Id: 4957b066-1ea9-4260-aa85-819cc6ef00ff
	I0419 17:15:36.840957    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:36.841016    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:36.841061    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:36.841101    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:36.841101    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:36 GMT
	I0419 17:15:36.841873    6268 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"565"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"560","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52077 chars]
	I0419 17:15:36.847053    6268 system_pods.go:59] 7 kube-system pods found
	I0419 17:15:36.847189    6268 system_pods.go:61] "coredns-7db6d8ff4d-b25zx" [fd0e7b75-307a-47c4-9f4f-a24534fc157e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0419 17:15:36.847234    6268 system_pods.go:61] "etcd-functional-614300" [dac02a21-acdc-4e45-8b20-1f96f98862fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0419 17:15:36.847267    6268 system_pods.go:61] "kube-apiserver-functional-614300" [6f4cb4ed-ce0c-4230-bf83-202649a788bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0419 17:15:36.847302    6268 system_pods.go:61] "kube-controller-manager-functional-614300" [622f8cac-6843-48b3-bb2e-0cdec34d13e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0419 17:15:36.847302    6268 system_pods.go:61] "kube-proxy-lrzcm" [9e920e7a-025c-40cd-8100-e279d31a6a36] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0419 17:15:36.847336    6268 system_pods.go:61] "kube-scheduler-functional-614300" [e53ec63d-4823-4689-9039-8eee1c8f8549] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0419 17:15:36.847336    6268 system_pods.go:61] "storage-provisioner" [04f6a541-81e8-4d8a-bab2-51e0112a9d5c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
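	Each kube-system pod above is summarized as its phase followed by any unsatisfied Ready/ContainersReady conditions, e.g. `Running / Ready:ContainersNotReady (containers with unready status: [coredns])`. A sketch of how such a summary string could be assembled from condition data (simplified; the field layout here is our assumption, not the actual Kubernetes API types):

	```python
	def summarize_pod(phase: str, conditions: dict[str, tuple[str, str]]) -> str:
	    """conditions maps an unsatisfied condition type to (reason, message);
	    only those are appended after the pod phase, slash-separated."""
	    parts = [phase]
	    for ctype in ("Ready", "ContainersReady"):
	        if ctype in conditions:
	            reason, message = conditions[ctype]
	            parts.append(f"{ctype}:{reason} ({message})")
	    return " / ".join(parts)


	if __name__ == "__main__":
	    msg = "containers with unready status: [coredns]"
	    print(summarize_pod("Running", {
	        "Ready": ("ContainersNotReady", msg),
	        "ContainersReady": ("ContainersNotReady", msg),
	    }))
	```

	After a control-plane restart this transient state is expected: the pods are Running but their containers have not yet passed readiness probes, which is why minikube then waits for the restarted kubelet to initialise.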
	I0419 17:15:36.847378    6268 system_pods.go:74] duration metric: took 15.0257ms to wait for pod list to return data ...
	I0419 17:15:36.847378    6268 node_conditions.go:102] verifying NodePressure condition ...
	I0419 17:15:36.847448    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes
	I0419 17:15:36.847448    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:36.847448    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:36.847448    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:36.850294    6268 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:15:36.850294    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:36.851708    6268 round_trippers.go:580]     Audit-Id: 519446d8-71ed-4db2-b1b9-28092d1cc680
	I0419 17:15:36.851708    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:36.851766    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:36.851824    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:36.851824    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:36.851824    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:36 GMT
	I0419 17:15:36.851993    6268 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"565"},"items":[{"metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4838 chars]
	I0419 17:15:36.853057    6268 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 17:15:36.853128    6268 node_conditions.go:123] node cpu capacity is 2
	I0419 17:15:36.853198    6268 node_conditions.go:105] duration metric: took 5.7799ms to run NodePressure ...
	I0419 17:15:36.853253    6268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 17:15:37.306770    6268 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0419 17:15:37.306883    6268 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0419 17:15:37.306883    6268 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0419 17:15:37.306883    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0419 17:15:37.306883    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:37.306883    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:37.306883    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:37.307681    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:37.307681    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:37.307681    6268 round_trippers.go:580]     Audit-Id: 7da63245-a95b-4dc3-9a8f-924fbc9c2059
	I0419 17:15:37.307681    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:37.307681    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:37.307681    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:37.307681    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:37.307681    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:37 GMT
	I0419 17:15:37.312123    6268 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"571"},"items":[{"metadata":{"name":"etcd-functional-614300","namespace":"kube-system","uid":"dac02a21-acdc-4e45-8b20-1f96f98862fb","resourceVersion":"556","creationTimestamp":"2024-04-20T00:13:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.34.3:2379","kubernetes.io/config.hash":"dd430d3fee522dd0b056f45fed60855c","kubernetes.io/config.mirror":"dd430d3fee522dd0b056f45fed60855c","kubernetes.io/config.seen":"2024-04-20T00:13:25.764687006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations
":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 31393 chars]
	I0419 17:15:37.313878    6268 kubeadm.go:733] kubelet initialised
	I0419 17:15:37.313878    6268 kubeadm.go:734] duration metric: took 6.9957ms waiting for restarted kubelet to initialise ...
	I0419 17:15:37.313878    6268 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 17:15:37.313878    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods
	I0419 17:15:37.313878    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:37.313878    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:37.313878    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:37.320193    6268 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:15:37.320193    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:37.322753    6268 round_trippers.go:580]     Audit-Id: e27e79c5-c9a3-4f6c-8f1c-04cfbc529e53
	I0419 17:15:37.322753    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:37.322753    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:37.322811    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:37.322811    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:37.322811    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:37 GMT
	I0419 17:15:37.323935    6268 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"571"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"560","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52077 chars]
	I0419 17:15:37.325610    6268 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-b25zx" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:37.326412    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:37.326453    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:37.326453    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:37.326511    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:37.353733    6268 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0419 17:15:37.353733    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:37.359212    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:37.359310    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:37 GMT
	I0419 17:15:37.359310    6268 round_trippers.go:580]     Audit-Id: c090e470-8129-4fbe-a3f6-9c7e7c3695e1
	I0419 17:15:37.359479    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:37.359479    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:37.359479    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:37.359788    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"560","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6499 chars]
	I0419 17:15:37.360764    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:37.360838    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:37.360838    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:37.360838    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:37.369524    6268 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 17:15:37.371033    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:37.371033    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:37.371033    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:37.371033    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:37.371033    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:37.371033    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:37 GMT
	I0419 17:15:37.371033    6268 round_trippers.go:580]     Audit-Id: e306e4df-3933-4a7e-9a2b-895a201c7ca9
	I0419 17:15:37.371033    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:37.835832    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:37.835931    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:37.835931    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:37.835963    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:37.854699    6268 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0419 17:15:37.854770    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:37.854770    6268 round_trippers.go:580]     Audit-Id: 961c1846-4fee-4402-b8d8-b347e22c2b2f
	I0419 17:15:37.854770    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:37.854770    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:37.854770    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:37.854770    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:37.854770    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:37 GMT
	I0419 17:15:37.855040    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"560","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6499 chars]
	I0419 17:15:37.855814    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:37.855814    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:37.855814    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:37.855814    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:37.876763    6268 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0419 17:15:37.876763    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:37.876852    6268 round_trippers.go:580]     Audit-Id: 9c432ecc-26ec-4b86-9318-677cc4587220
	I0419 17:15:37.876852    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:37.876852    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:37.876852    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:37.876852    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:37.876852    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:37 GMT
	I0419 17:15:37.877164    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:38.331656    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:38.331763    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:38.331763    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:38.331853    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:38.332529    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:38.335988    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:38.335988    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:38.335988    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:38 GMT
	I0419 17:15:38.335988    6268 round_trippers.go:580]     Audit-Id: 4a338da1-1a43-4856-81e3-30259de4e52c
	I0419 17:15:38.335988    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:38.335988    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:38.335988    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:38.336288    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"560","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6499 chars]
	I0419 17:15:38.337095    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:38.337167    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:38.337167    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:38.337167    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:38.344103    6268 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:15:38.344103    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:38.345199    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:38.345199    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:38.345199    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:38 GMT
	I0419 17:15:38.345256    6268 round_trippers.go:580]     Audit-Id: cb39fcfb-06d4-46e7-b598-dab2eadc2561
	I0419 17:15:38.345256    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:38.345256    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:38.345334    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:38.836585    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:38.836660    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:38.836660    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:38.836660    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:38.837304    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:38.840935    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:38.840935    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:38.840935    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:38 GMT
	I0419 17:15:38.840935    6268 round_trippers.go:580]     Audit-Id: e4666fbe-68e9-4223-b919-5c714b952c89
	I0419 17:15:38.840935    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:38.840935    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:38.840935    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:38.841190    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"579","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6675 chars]
	I0419 17:15:38.841999    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:38.842086    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:38.842086    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:38.842086    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:38.846886    6268 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:15:38.846886    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:38.846886    6268 round_trippers.go:580]     Audit-Id: 58ed1341-272d-494b-9ab4-060b50680a3d
	I0419 17:15:38.846886    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:38.846886    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:38.846886    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:38.846886    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:38.846886    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:38 GMT
	I0419 17:15:38.847634    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:39.329868    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:39.329945    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:39.329945    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:39.329945    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:39.334057    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:39.334113    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:39.334113    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:39.334113    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:39.334113    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:39.334113    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:39 GMT
	I0419 17:15:39.334113    6268 round_trippers.go:580]     Audit-Id: 6e57764b-b067-46bc-a000-270d112ca93d
	I0419 17:15:39.334113    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:39.334113    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"579","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6675 chars]
	I0419 17:15:39.334721    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:39.334721    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:39.334721    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:39.334721    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:39.335417    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:39.335417    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:39.335417    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:39.335417    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:39.343635    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:39 GMT
	I0419 17:15:39.343635    6268 round_trippers.go:580]     Audit-Id: d55ae9e3-1963-475b-9604-d9ba897e01bc
	I0419 17:15:39.343635    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:39.343635    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:39.343893    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:39.344597    6268 pod_ready.go:102] pod "coredns-7db6d8ff4d-b25zx" in "kube-system" namespace has status "Ready":"False"
	I0419 17:15:39.841420    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:39.841489    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:39.841489    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:39.841549    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:39.845689    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:39.845689    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:39.845689    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:39.845689    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:39.845689    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:39 GMT
	I0419 17:15:39.845796    6268 round_trippers.go:580]     Audit-Id: b293cdfc-1ffc-4161-90b6-6159327ffb0e
	I0419 17:15:39.845796    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:39.845796    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:39.846022    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"579","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6675 chars]
	I0419 17:15:39.846916    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:39.846975    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:39.846975    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:39.846975    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:39.850521    6268 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:15:39.851426    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:39.851426    6268 round_trippers.go:580]     Audit-Id: 0a26adb3-b241-4636-bc79-16a2462f2148
	I0419 17:15:39.851426    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:39.851426    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:39.851426    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:39.851507    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:39.851507    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:39 GMT
	I0419 17:15:39.851779    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:40.338293    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:40.338369    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:40.338369    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:40.338369    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:40.338686    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:40.342733    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:40.342733    6268 round_trippers.go:580]     Audit-Id: 44df91a4-2daf-40b4-b806-03c88772cb5e
	I0419 17:15:40.342733    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:40.342733    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:40.342830    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:40.342830    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:40.342830    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:40 GMT
	I0419 17:15:40.343463    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"579","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6675 chars]
	I0419 17:15:40.344158    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:40.344230    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:40.344230    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:40.344230    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:40.344431    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:40.344431    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:40.344431    6268 round_trippers.go:580]     Audit-Id: 6b36a451-562a-4848-81f0-435fb96bd816
	I0419 17:15:40.344431    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:40.344431    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:40.346734    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:40.346734    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:40.346734    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:40 GMT
	I0419 17:15:40.347018    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:40.834351    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:40.834351    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:40.834351    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:40.834351    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:40.834882    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:40.839044    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:40.839044    6268 round_trippers.go:580]     Audit-Id: ff156d32-b2a1-4bce-969c-ef0077b24390
	I0419 17:15:40.839044    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:40.839044    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:40.839044    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:40.839044    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:40.839044    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:40 GMT
	I0419 17:15:40.839044    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"579","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6675 chars]
	I0419 17:15:40.839888    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:40.839888    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:40.839888    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:40.839888    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:40.840417    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:40.840417    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:40.842871    6268 round_trippers.go:580]     Audit-Id: 45136d3c-74d9-4198-aa06-d70a8ded1b2e
	I0419 17:15:40.842871    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:40.842871    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:40.842871    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:40.842999    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:40.842999    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:40 GMT
	I0419 17:15:40.843439    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:41.337510    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:41.337510    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:41.337510    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:41.337510    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:41.338084    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:41.342649    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:41.342649    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:41.342649    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:41.342649    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:41.342649    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:41 GMT
	I0419 17:15:41.342649    6268 round_trippers.go:580]     Audit-Id: 46e9f5db-2fef-4f91-8649-50d4f6e03859
	I0419 17:15:41.342649    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:41.342885    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"579","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6675 chars]
	I0419 17:15:41.343835    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:41.343891    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:41.343891    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:41.343891    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:41.344252    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:41.347377    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:41.347377    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:41.347452    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:41 GMT
	I0419 17:15:41.347452    6268 round_trippers.go:580]     Audit-Id: b321ab38-c542-413a-9f7d-4bd8bae988ba
	I0419 17:15:41.347452    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:41.347452    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:41.347452    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:41.347452    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:41.348264    6268 pod_ready.go:102] pod "coredns-7db6d8ff4d-b25zx" in "kube-system" namespace has status "Ready":"False"
	I0419 17:15:41.836383    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:41.836383    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:41.836383    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:41.836383    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:41.837088    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:41.844262    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:41.844359    6268 round_trippers.go:580]     Audit-Id: 545ad228-23d6-4a2c-a81f-23a13971c4d0
	I0419 17:15:41.844359    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:41.844359    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:41.844359    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:41.844359    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:41.844448    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:41 GMT
	I0419 17:15:41.844680    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"579","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6675 chars]
	I0419 17:15:41.845469    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:41.845564    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:41.845564    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:41.845564    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:41.845781    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:41.845781    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:41.849500    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:41.849500    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:41.849500    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:41 GMT
	I0419 17:15:41.849500    6268 round_trippers.go:580]     Audit-Id: 4cd623a3-2b74-4fde-8573-b6f784e599f2
	I0419 17:15:41.849500    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:41.849500    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:41.851166    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:42.338981    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:42.339232    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:42.339232    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:42.339232    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:42.344017    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:42.344017    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:42.344085    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:42.344085    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:42.344085    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:42.344085    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:42 GMT
	I0419 17:15:42.344147    6268 round_trippers.go:580]     Audit-Id: 376716a4-95f2-4061-98fb-638601ab6c9e
	I0419 17:15:42.344147    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:42.344363    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"579","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6675 chars]
	I0419 17:15:42.344861    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:42.344861    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:42.344861    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:42.344861    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:42.350206    6268 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:15:42.350206    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:42.350206    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:42.350206    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:42.350206    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:42 GMT
	I0419 17:15:42.350206    6268 round_trippers.go:580]     Audit-Id: c65eda68-4519-46df-b5d1-ceddd1c3e22a
	I0419 17:15:42.350206    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:42.350206    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:42.350747    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:42.837017    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:42.837017    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:42.837017    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:42.837017    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:42.841407    6268 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:15:42.841407    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:42.841407    6268 round_trippers.go:580]     Audit-Id: d5f5612b-6d90-40e7-a058-431a6457a71f
	I0419 17:15:42.841407    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:42.841407    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:42.841407    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:42.841407    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:42.841407    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:42 GMT
	I0419 17:15:42.841407    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"579","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6675 chars]
	I0419 17:15:42.842366    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:42.842943    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:42.842943    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:42.842943    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:42.850314    6268 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 17:15:42.850370    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:42.850370    6268 round_trippers.go:580]     Audit-Id: 603d8fee-a030-4382-8a02-1591a041ce09
	I0419 17:15:42.850370    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:42.850425    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:42.850440    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:42.850440    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:42.850440    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:42 GMT
	I0419 17:15:42.852266    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:43.328464    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:43.328464    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:43.328464    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:43.328464    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:43.335415    6268 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:15:43.335524    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:43.335524    6268 round_trippers.go:580]     Audit-Id: b3a9c694-092c-4852-8c8c-54652ea1267a
	I0419 17:15:43.335524    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:43.335524    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:43.335524    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:43.335524    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:43.335524    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:43 GMT
	I0419 17:15:43.335524    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"579","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6675 chars]
	I0419 17:15:43.336225    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:43.336225    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:43.336225    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:43.336225    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:43.339733    6268 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:15:43.339733    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:43.339733    6268 round_trippers.go:580]     Audit-Id: 259f3831-080c-423c-9ca1-e5c13544b5c2
	I0419 17:15:43.340587    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:43.340587    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:43.340587    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:43.340587    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:43.340587    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:43 GMT
	I0419 17:15:43.340827    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:43.834132    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:43.834132    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:43.834216    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:43.834216    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:43.834591    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:43.834591    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:43.838483    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:43.838483    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:43 GMT
	I0419 17:15:43.838483    6268 round_trippers.go:580]     Audit-Id: e070ab12-a963-45de-ba17-41e2ffad5dba
	I0419 17:15:43.838483    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:43.838483    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:43.838483    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:43.838700    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"579","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6675 chars]
	I0419 17:15:43.839733    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:43.839733    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:43.839733    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:43.839733    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:43.843300    6268 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:15:43.843300    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:43.843300    6268 round_trippers.go:580]     Audit-Id: 820f7806-8561-4e51-81cd-127b7d8057fd
	I0419 17:15:43.843300    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:43.843300    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:43.843300    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:43.843300    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:43.843300    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:43 GMT
	I0419 17:15:43.843937    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:43.844270    6268 pod_ready.go:102] pod "coredns-7db6d8ff4d-b25zx" in "kube-system" namespace has status "Ready":"False"
	I0419 17:15:44.332121    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:44.332121    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:44.332121    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:44.332121    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:44.332653    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:44.336565    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:44.336565    6268 round_trippers.go:580]     Audit-Id: c8312d1c-0576-4e33-ad5b-d7a4958b831c
	I0419 17:15:44.336565    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:44.336565    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:44.336565    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:44.336565    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:44.336565    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:44 GMT
	I0419 17:15:44.336813    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"579","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6675 chars]
	I0419 17:15:44.337379    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:44.337379    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:44.337379    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:44.337379    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:44.343646    6268 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:15:44.343646    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:44.343646    6268 round_trippers.go:580]     Audit-Id: 21505c08-4047-4b5c-9bcc-54c0438736b4
	I0419 17:15:44.343646    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:44.343646    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:44.343646    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:44.343646    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:44.343646    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:44 GMT
	I0419 17:15:44.343646    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:44.834272    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:44.834272    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:44.834366    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:44.834366    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:44.841271    6268 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:15:44.841271    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:44.841271    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:44.841271    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:44 GMT
	I0419 17:15:44.841271    6268 round_trippers.go:580]     Audit-Id: 9da7a64c-8f58-4e3c-82d9-674cdf6d4909
	I0419 17:15:44.841271    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:44.841271    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:44.841271    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:44.841271    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"579","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6675 chars]
	I0419 17:15:44.841271    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:44.841271    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:44.841271    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:44.841271    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:44.844190    6268 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:15:44.844190    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:44.844190    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:44.844190    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:44.844190    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:44 GMT
	I0419 17:15:44.844190    6268 round_trippers.go:580]     Audit-Id: 76d78666-e688-471e-9d96-0ef6b6a8bbd5
	I0419 17:15:44.844190    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:44.844190    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:44.844190    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:45.335443    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:45.335443    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:45.335443    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:45.335443    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:45.335996    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:45.339084    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:45.339084    6268 round_trippers.go:580]     Audit-Id: e8766b62-0109-4416-a4f8-360ab97dbb95
	I0419 17:15:45.339084    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:45.339084    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:45.339154    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:45.339154    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:45.339154    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:45 GMT
	I0419 17:15:45.341189    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"579","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6675 chars]
	I0419 17:15:45.341734    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:45.341734    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:45.341734    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:45.341734    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:45.344966    6268 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:15:45.344966    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:45.344966    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:45 GMT
	I0419 17:15:45.344966    6268 round_trippers.go:580]     Audit-Id: 6c306c5e-9a68-49b1-8832-6ee0dfc8eaa5
	I0419 17:15:45.344966    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:45.346853    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:45.346853    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:45.346853    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:45.346923    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:45.834268    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:45.834372    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:45.834372    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:45.834372    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:45.834633    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:45.834633    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:45.834633    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:45.834633    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:45.834633    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:45.834633    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:45 GMT
	I0419 17:15:45.834633    6268 round_trippers.go:580]     Audit-Id: 3892a01d-04bc-4196-b004-aa3b4c19026d
	I0419 17:15:45.834633    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:45.838992    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"579","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6675 chars]
	I0419 17:15:45.839833    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:45.839908    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:45.839908    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:45.839908    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:45.842774    6268 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:15:45.843058    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:45.843175    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:45.843175    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:45 GMT
	I0419 17:15:45.843175    6268 round_trippers.go:580]     Audit-Id: 16e5a5ec-3731-4465-b28c-112259364fe5
	I0419 17:15:45.843175    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:45.843175    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:45.843175    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:45.843501    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:46.330773    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:46.330773    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:46.330773    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:46.330773    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:46.331335    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:46.331335    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:46.331335    6268 round_trippers.go:580]     Audit-Id: cc4c8532-5a59-44ed-908e-b1ae6d765373
	I0419 17:15:46.335155    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:46.335155    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:46.335155    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:46.335155    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:46.335155    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:46 GMT
	I0419 17:15:46.335319    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"585","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6446 chars]
	I0419 17:15:46.336130    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:46.336206    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:46.336206    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:46.336206    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:46.336361    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:46.336361    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:46.336361    6268 round_trippers.go:580]     Audit-Id: 0ade2084-15e9-4108-839d-ff4d77e33eda
	I0419 17:15:46.336361    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:46.336361    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:46.336361    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:46.336361    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:46.336361    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:46 GMT
	I0419 17:15:46.339680    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:46.340134    6268 pod_ready.go:92] pod "coredns-7db6d8ff4d-b25zx" in "kube-system" namespace has status "Ready":"True"
	I0419 17:15:46.340213    6268 pod_ready.go:81] duration metric: took 9.0140066s for pod "coredns-7db6d8ff4d-b25zx" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:46.340213    6268 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-614300" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:46.340357    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/etcd-functional-614300
	I0419 17:15:46.340400    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:46.340400    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:46.340400    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:46.340572    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:46.342864    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:46.343000    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:46.343043    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:46.343043    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:46.343043    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:46 GMT
	I0419 17:15:46.343114    6268 round_trippers.go:580]     Audit-Id: 9e086eef-13bb-452e-ae57-5a33492aee4a
	I0419 17:15:46.343114    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:46.343114    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-614300","namespace":"kube-system","uid":"dac02a21-acdc-4e45-8b20-1f96f98862fb","resourceVersion":"556","creationTimestamp":"2024-04-20T00:13:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.34.3:2379","kubernetes.io/config.hash":"dd430d3fee522dd0b056f45fed60855c","kubernetes.io/config.mirror":"dd430d3fee522dd0b056f45fed60855c","kubernetes.io/config.seen":"2024-04-20T00:13:25.764687006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6571 chars]
	I0419 17:15:46.344152    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:46.344226    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:46.344226    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:46.344226    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:46.349876    6268 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:15:46.349876    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:46.349876    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:46.349876    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:46 GMT
	I0419 17:15:46.349876    6268 round_trippers.go:580]     Audit-Id: 91c7f6be-d38e-48c0-9e83-15f616110952
	I0419 17:15:46.349876    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:46.349876    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:46.349876    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:46.350447    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:46.846863    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/etcd-functional-614300
	I0419 17:15:46.846863    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:46.846863    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:46.846863    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:46.847405    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:46.851452    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:46.851525    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:46.851525    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:46.851525    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:46.851525    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:46 GMT
	I0419 17:15:46.851525    6268 round_trippers.go:580]     Audit-Id: 99c3576d-c0c3-47be-b6cd-41a0c464bede
	I0419 17:15:46.851525    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:46.851525    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-614300","namespace":"kube-system","uid":"dac02a21-acdc-4e45-8b20-1f96f98862fb","resourceVersion":"556","creationTimestamp":"2024-04-20T00:13:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.34.3:2379","kubernetes.io/config.hash":"dd430d3fee522dd0b056f45fed60855c","kubernetes.io/config.mirror":"dd430d3fee522dd0b056f45fed60855c","kubernetes.io/config.seen":"2024-04-20T00:13:25.764687006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6571 chars]
	I0419 17:15:46.852912    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:46.852912    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:46.852912    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:46.852912    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:46.858806    6268 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:15:46.861913    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:46.861913    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:46.862164    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:46.862243    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:46.862243    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:46.862353    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:46 GMT
	I0419 17:15:46.862353    6268 round_trippers.go:580]     Audit-Id: 7b15790c-a928-4487-8fc7-4a785a140381
	I0419 17:15:46.862353    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:47.355216    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/etcd-functional-614300
	I0419 17:15:47.355321    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:47.355321    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:47.355321    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:47.359387    6268 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:15:47.359459    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:47.359459    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:47.359459    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:47 GMT
	I0419 17:15:47.359459    6268 round_trippers.go:580]     Audit-Id: 43a72b0a-d900-41a5-b2bb-3007ee2a534f
	I0419 17:15:47.359533    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:47.359533    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:47.359533    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:47.359533    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-614300","namespace":"kube-system","uid":"dac02a21-acdc-4e45-8b20-1f96f98862fb","resourceVersion":"556","creationTimestamp":"2024-04-20T00:13:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.34.3:2379","kubernetes.io/config.hash":"dd430d3fee522dd0b056f45fed60855c","kubernetes.io/config.mirror":"dd430d3fee522dd0b056f45fed60855c","kubernetes.io/config.seen":"2024-04-20T00:13:25.764687006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6571 chars]
	I0419 17:15:47.360428    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:47.360504    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:47.360504    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:47.360504    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:47.365520    6268 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:15:47.365520    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:47.365520    6268 round_trippers.go:580]     Audit-Id: 075e3c10-11a2-4638-a666-db43e359f91c
	I0419 17:15:47.365520    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:47.365520    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:47.365520    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:47.365520    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:47.365520    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:47 GMT
	I0419 17:15:47.365520    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:47.844488    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/etcd-functional-614300
	I0419 17:15:47.844687    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:47.844687    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:47.844687    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:47.845372    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:47.845372    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:47.845372    6268 round_trippers.go:580]     Audit-Id: 8cfeace8-895e-4aa1-9496-649f8c8f6453
	I0419 17:15:47.845372    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:47.845372    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:47.845372    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:47.845372    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:47.848763    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:47 GMT
	I0419 17:15:47.848979    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-614300","namespace":"kube-system","uid":"dac02a21-acdc-4e45-8b20-1f96f98862fb","resourceVersion":"556","creationTimestamp":"2024-04-20T00:13:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.34.3:2379","kubernetes.io/config.hash":"dd430d3fee522dd0b056f45fed60855c","kubernetes.io/config.mirror":"dd430d3fee522dd0b056f45fed60855c","kubernetes.io/config.seen":"2024-04-20T00:13:25.764687006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6571 chars]
	I0419 17:15:47.849645    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:47.849715    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:47.849715    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:47.849715    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:47.849913    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:47.853440    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:47.853529    6268 round_trippers.go:580]     Audit-Id: cfdd6bce-7b05-4627-a8d7-dbafd4e8ee47
	I0419 17:15:47.853529    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:47.853529    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:47.853529    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:47.853529    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:47.853529    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:47 GMT
	I0419 17:15:47.853924    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:48.345024    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/etcd-functional-614300
	I0419 17:15:48.345380    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:48.345446    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:48.345446    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:48.350060    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:48.350137    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:48.350137    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:48.350137    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:48.350137    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:48.350137    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:48.350137    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:48 GMT
	I0419 17:15:48.350137    6268 round_trippers.go:580]     Audit-Id: 2bae854d-d652-4ca1-bf7f-7f06386829d6
	I0419 17:15:48.350137    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-614300","namespace":"kube-system","uid":"dac02a21-acdc-4e45-8b20-1f96f98862fb","resourceVersion":"556","creationTimestamp":"2024-04-20T00:13:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.34.3:2379","kubernetes.io/config.hash":"dd430d3fee522dd0b056f45fed60855c","kubernetes.io/config.mirror":"dd430d3fee522dd0b056f45fed60855c","kubernetes.io/config.seen":"2024-04-20T00:13:25.764687006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6571 chars]
	I0419 17:15:48.350861    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:48.350861    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:48.350861    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:48.350861    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:48.360188    6268 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0419 17:15:48.361573    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:48.361714    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:48.361714    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:48.361714    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:48.361714    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:48 GMT
	I0419 17:15:48.361714    6268 round_trippers.go:580]     Audit-Id: 52042417-268b-4c9e-82de-93200b8e946f
	I0419 17:15:48.361714    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:48.361714    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:48.362455    6268 pod_ready.go:102] pod "etcd-functional-614300" in "kube-system" namespace has status "Ready":"False"
	I0419 17:15:48.841949    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/etcd-functional-614300
	I0419 17:15:48.841949    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:48.841949    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:48.841949    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:48.845498    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:48.845498    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:48.845498    6268 round_trippers.go:580]     Audit-Id: 88ea4ae7-fa19-42da-a7ff-8044ae4c165f
	I0419 17:15:48.845498    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:48.845498    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:48.845498    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:48.845498    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:48.845498    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:48 GMT
	I0419 17:15:48.846159    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-614300","namespace":"kube-system","uid":"dac02a21-acdc-4e45-8b20-1f96f98862fb","resourceVersion":"591","creationTimestamp":"2024-04-20T00:13:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.34.3:2379","kubernetes.io/config.hash":"dd430d3fee522dd0b056f45fed60855c","kubernetes.io/config.mirror":"dd430d3fee522dd0b056f45fed60855c","kubernetes.io/config.seen":"2024-04-20T00:13:25.764687006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6347 chars]
	I0419 17:15:48.846898    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:48.846898    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:48.846898    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:48.846898    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:48.847542    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:48.847542    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:48.847542    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:48.847542    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:48.847542    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:48.847542    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:48 GMT
	I0419 17:15:48.847542    6268 round_trippers.go:580]     Audit-Id: c55fde4e-dce2-436e-bb04-4fc37cc0c6c8
	I0419 17:15:48.851493    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:48.851763    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:48.851961    6268 pod_ready.go:92] pod "etcd-functional-614300" in "kube-system" namespace has status "Ready":"True"
	I0419 17:15:48.851961    6268 pod_ready.go:81] duration metric: took 2.5117419s for pod "etcd-functional-614300" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:48.851961    6268 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-614300" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:48.851961    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-614300
	I0419 17:15:48.851961    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:48.851961    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:48.851961    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:48.852638    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:48.855702    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:48.855702    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:48.855702    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:48.855702    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:48 GMT
	I0419 17:15:48.855702    6268 round_trippers.go:580]     Audit-Id: 38d03285-83ed-4a27-9d04-1eea93366eb8
	I0419 17:15:48.855702    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:48.855702    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:48.856069    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-614300","namespace":"kube-system","uid":"6f4cb4ed-ce0c-4230-bf83-202649a788bf","resourceVersion":"582","creationTimestamp":"2024-04-20T00:13:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.34.3:8441","kubernetes.io/config.hash":"dda79ad7643f6f0a709844c0e7181f0e","kubernetes.io/config.mirror":"dda79ad7643f6f0a709844c0e7181f0e","kubernetes.io/config.seen":"2024-04-20T00:13:25.764692506Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8134 chars]
	I0419 17:15:48.856241    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:48.856769    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:48.856769    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:48.856769    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:48.857011    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:48.857011    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:48.857011    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:48.857011    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:48.857011    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:48.857011    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:48 GMT
	I0419 17:15:48.857011    6268 round_trippers.go:580]     Audit-Id: be5a02d5-9a33-4fef-87ac-5508838d3ac1
	I0419 17:15:48.857011    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:48.857011    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:48.861652    6268 pod_ready.go:92] pod "kube-apiserver-functional-614300" in "kube-system" namespace has status "Ready":"True"
	I0419 17:15:48.861652    6268 pod_ready.go:81] duration metric: took 9.6904ms for pod "kube-apiserver-functional-614300" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:48.861652    6268 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-614300" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:48.861981    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-614300
	I0419 17:15:48.861981    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:48.862061    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:48.862061    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:48.869934    6268 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 17:15:48.869934    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:48.869934    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:48.869934    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:48 GMT
	I0419 17:15:48.869934    6268 round_trippers.go:580]     Audit-Id: b63bacff-7925-41c2-8af9-f9b03b26a314
	I0419 17:15:48.869934    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:48.869934    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:48.869934    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:48.870690    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-614300","namespace":"kube-system","uid":"622f8cac-6843-48b3-bb2e-0cdec34d13e1","resourceVersion":"587","creationTimestamp":"2024-04-20T00:13:25Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9bece462aca006eeb9b812aa025158d7","kubernetes.io/config.mirror":"9bece462aca006eeb9b812aa025158d7","kubernetes.io/config.seen":"2024-04-20T00:13:25.764693906Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7714 chars]
	I0419 17:15:48.871240    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:48.871240    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:48.871240    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:48.871240    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:48.873068    6268 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:15:48.873068    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:48.873068    6268 round_trippers.go:580]     Audit-Id: 8615b760-8b7e-4bc3-9f98-7e16082ca4b2
	I0419 17:15:48.873068    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:48.873068    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:48.873068    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:48.873068    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:48.873068    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:48 GMT
	I0419 17:15:48.876137    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:48.876598    6268 pod_ready.go:92] pod "kube-controller-manager-functional-614300" in "kube-system" namespace has status "Ready":"True"
	I0419 17:15:48.876598    6268 pod_ready.go:81] duration metric: took 14.6842ms for pod "kube-controller-manager-functional-614300" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:48.876598    6268 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lrzcm" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:48.876598    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/kube-proxy-lrzcm
	I0419 17:15:48.876598    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:48.876598    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:48.876598    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:48.877178    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:48.879987    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:48.880042    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:48.880042    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:48.880042    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:48 GMT
	I0419 17:15:48.880042    6268 round_trippers.go:580]     Audit-Id: e5e7a95b-c05c-4a66-971f-b5f21b1589ad
	I0419 17:15:48.880042    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:48.880042    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:48.880148    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lrzcm","generateName":"kube-proxy-","namespace":"kube-system","uid":"9e920e7a-025c-40cd-8100-e279d31a6a36","resourceVersion":"580","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1a2d9b60-5b18-4ede-a5e7-690f791fa1d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a2d9b60-5b18-4ede-a5e7-690f791fa1d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6165 chars]
	I0419 17:15:48.880476    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:48.880476    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:48.880476    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:48.880476    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:48.881759    6268 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:15:48.881759    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:48.881759    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:48.883427    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:48.883427    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:48.883427    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:48 GMT
	I0419 17:15:48.883427    6268 round_trippers.go:580]     Audit-Id: 0ce5124d-af0e-454e-b89e-0f5fa9ab9601
	I0419 17:15:48.883427    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:48.883785    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:48.884045    6268 pod_ready.go:92] pod "kube-proxy-lrzcm" in "kube-system" namespace has status "Ready":"True"
	I0419 17:15:48.884045    6268 pod_ready.go:81] duration metric: took 7.4475ms for pod "kube-proxy-lrzcm" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:48.884045    6268 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-614300" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:48.884045    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-614300
	I0419 17:15:48.884045    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:48.884045    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:48.884045    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:48.889377    6268 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:15:48.889377    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:48.889377    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:48.889377    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:48.889377    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:48 GMT
	I0419 17:15:48.889377    6268 round_trippers.go:580]     Audit-Id: 95296c1a-4b66-4b3b-981d-78407ac454f3
	I0419 17:15:48.889377    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:48.889377    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:48.889377    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-614300","namespace":"kube-system","uid":"e53ec63d-4823-4689-9039-8eee1c8f8549","resourceVersion":"557","creationTimestamp":"2024-04-20T00:13:25Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5486b6c55771cb4242a37f64de866e73","kubernetes.io/config.mirror":"5486b6c55771cb4242a37f64de866e73","kubernetes.io/config.seen":"2024-04-20T00:13:17.570815911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5436 chars]
	I0419 17:15:48.889929    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:48.889929    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:48.889929    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:48.889929    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:48.896120    6268 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:15:48.897324    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:48.897324    6268 round_trippers.go:580]     Audit-Id: 561ee5b1-141a-4f35-9197-b2b9ff4e08cc
	I0419 17:15:48.897324    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:48.897357    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:48.897357    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:48.897357    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:48.897357    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:48 GMT
	I0419 17:15:48.897357    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:49.387080    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-614300
	I0419 17:15:49.387324    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:49.387388    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:49.387388    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:49.388107    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:49.388107    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:49.391532    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:49.391532    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:49 GMT
	I0419 17:15:49.391532    6268 round_trippers.go:580]     Audit-Id: 2f1cd4cb-0fdc-44e0-a72f-06bd1f13d830
	I0419 17:15:49.391532    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:49.391532    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:49.391532    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:49.391840    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-614300","namespace":"kube-system","uid":"e53ec63d-4823-4689-9039-8eee1c8f8549","resourceVersion":"557","creationTimestamp":"2024-04-20T00:13:25Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5486b6c55771cb4242a37f64de866e73","kubernetes.io/config.mirror":"5486b6c55771cb4242a37f64de866e73","kubernetes.io/config.seen":"2024-04-20T00:13:17.570815911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5436 chars]
	I0419 17:15:49.392547    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:49.392604    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:49.392604    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:49.392604    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:49.392755    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:49.395163    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:49.395163    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:49.395163    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:49.395163    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:49.395163    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:49 GMT
	I0419 17:15:49.395163    6268 round_trippers.go:580]     Audit-Id: 5523a624-4cc9-4945-950e-7bb792830206
	I0419 17:15:49.395163    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:49.395338    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:49.888969    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-614300
	I0419 17:15:49.889236    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:49.889236    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:49.889236    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:49.889591    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:49.889591    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:49.889591    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:49 GMT
	I0419 17:15:49.889591    6268 round_trippers.go:580]     Audit-Id: 93d1169f-18b6-40d2-9750-133fa5aa23e6
	I0419 17:15:49.889591    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:49.889591    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:49.893362    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:49.893362    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:49.893622    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-614300","namespace":"kube-system","uid":"e53ec63d-4823-4689-9039-8eee1c8f8549","resourceVersion":"557","creationTimestamp":"2024-04-20T00:13:25Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5486b6c55771cb4242a37f64de866e73","kubernetes.io/config.mirror":"5486b6c55771cb4242a37f64de866e73","kubernetes.io/config.seen":"2024-04-20T00:13:17.570815911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5436 chars]
	I0419 17:15:49.894230    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:49.894345    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:49.894345    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:49.894345    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:49.900802    6268 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:15:49.900802    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:49.900802    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:49 GMT
	I0419 17:15:49.900802    6268 round_trippers.go:580]     Audit-Id: 2d352e9d-cefc-4fa0-8e6b-c44d43e5312f
	I0419 17:15:49.900802    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:49.900802    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:49.900802    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:49.900802    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:49.901573    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:50.385980    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-614300
	I0419 17:15:50.385980    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:50.385980    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:50.385980    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:50.386525    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:50.390279    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:50.390279    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:50.390279    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:50.390279    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:50.390279    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:50.390279    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:50 GMT
	I0419 17:15:50.390279    6268 round_trippers.go:580]     Audit-Id: 1aca7bfd-f477-4450-b570-6caef17b6842
	I0419 17:15:50.390588    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-614300","namespace":"kube-system","uid":"e53ec63d-4823-4689-9039-8eee1c8f8549","resourceVersion":"557","creationTimestamp":"2024-04-20T00:13:25Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5486b6c55771cb4242a37f64de866e73","kubernetes.io/config.mirror":"5486b6c55771cb4242a37f64de866e73","kubernetes.io/config.seen":"2024-04-20T00:13:17.570815911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5436 chars]
	I0419 17:15:50.391647    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:50.391647    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:50.391647    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:50.391647    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:50.391954    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:50.391954    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:50.391954    6268 round_trippers.go:580]     Audit-Id: 37c45803-3e82-4959-9d91-44e1ca623c03
	I0419 17:15:50.391954    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:50.391954    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:50.391954    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:50.394600    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:50.394600    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:50 GMT
	I0419 17:15:50.394737    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:50.893908    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-614300
	I0419 17:15:50.894189    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:50.894189    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:50.894189    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:50.894512    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:50.898444    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:50.898444    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:50.898444    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:50.898523    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:50 GMT
	I0419 17:15:50.898523    6268 round_trippers.go:580]     Audit-Id: 871cd681-c45f-405f-a868-71fadfd68b5e
	I0419 17:15:50.898523    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:50.898523    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:50.898799    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-614300","namespace":"kube-system","uid":"e53ec63d-4823-4689-9039-8eee1c8f8549","resourceVersion":"595","creationTimestamp":"2024-04-20T00:13:25Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5486b6c55771cb4242a37f64de866e73","kubernetes.io/config.mirror":"5486b6c55771cb4242a37f64de866e73","kubernetes.io/config.seen":"2024-04-20T00:13:17.570815911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5192 chars]
	I0419 17:15:50.899578    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:50.899578    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:50.899578    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:50.899578    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:50.899887    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:50.903513    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:50.903619    6268 round_trippers.go:580]     Audit-Id: bdb6f8ef-3e18-436a-9de1-2206330a524a
	I0419 17:15:50.903619    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:50.903619    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:50.903619    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:50.903619    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:50.903619    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:50 GMT
	I0419 17:15:50.903962    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:50.904314    6268 pod_ready.go:92] pod "kube-scheduler-functional-614300" in "kube-system" namespace has status "Ready":"True"
	I0419 17:15:50.904314    6268 pod_ready.go:81] duration metric: took 2.0202637s for pod "kube-scheduler-functional-614300" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:50.904314    6268 pod_ready.go:38] duration metric: took 13.5904017s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 17:15:50.904314    6268 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0419 17:15:50.928112    6268 command_runner.go:130] > -16
	I0419 17:15:50.928232    6268 ops.go:34] apiserver oom_adj: -16
	I0419 17:15:50.928232    6268 kubeadm.go:591] duration metric: took 23.7582367s to restartPrimaryControlPlane
	I0419 17:15:50.928232    6268 kubeadm.go:393] duration metric: took 23.8668277s to StartCluster
	I0419 17:15:50.928306    6268 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:15:50.928396    6268 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 17:15:50.929805    6268 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:15:50.930919    6268 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.34.3 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 17:15:50.930919    6268 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0419 17:15:50.931501    6268 addons.go:69] Setting storage-provisioner=true in profile "functional-614300"
	I0419 17:15:50.931501    6268 addons.go:234] Setting addon storage-provisioner=true in "functional-614300"
	I0419 17:15:50.931567    6268 addons.go:69] Setting default-storageclass=true in profile "functional-614300"
	W0419 17:15:50.931610    6268 addons.go:243] addon storage-provisioner should already be in state true
	I0419 17:15:50.931734    6268 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-614300"
	I0419 17:15:50.935852    6268 out.go:177] * Verifying Kubernetes components...
	I0419 17:15:50.931830    6268 config.go:182] Loaded profile config "functional-614300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:15:50.931933    6268 host.go:66] Checking if "functional-614300" exists ...
	I0419 17:15:50.932732    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:15:50.937164    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:15:50.952439    6268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:15:51.263313    6268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 17:15:51.298140    6268 node_ready.go:35] waiting up to 6m0s for node "functional-614300" to be "Ready" ...
	I0419 17:15:51.298277    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:51.298391    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:51.298391    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:51.298391    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:51.302590    6268 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:15:51.302590    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:51.302590    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:51 GMT
	I0419 17:15:51.302590    6268 round_trippers.go:580]     Audit-Id: 7dbca971-147e-40d0-9f5a-1794b73253a0
	I0419 17:15:51.302590    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:51.302731    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:51.302731    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:51.302731    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:51.303041    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:51.303450    6268 node_ready.go:49] node "functional-614300" has status "Ready":"True"
	I0419 17:15:51.303450    6268 node_ready.go:38] duration metric: took 5.3101ms for node "functional-614300" to be "Ready" ...
	I0419 17:15:51.303450    6268 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 17:15:51.303450    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods
	I0419 17:15:51.303450    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:51.304001    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:51.304067    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:51.307612    6268 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:15:51.310055    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:51.310125    6268 round_trippers.go:580]     Audit-Id: 38fcff98-3883-4bca-82f1-9d606258a521
	I0419 17:15:51.310125    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:51.310125    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:51.310125    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:51.310196    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:51.310196    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:51 GMT
	I0419 17:15:51.311768    6268 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"595"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"585","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50650 chars]
	I0419 17:15:51.314611    6268 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b25zx" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:51.314714    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b25zx
	I0419 17:15:51.314714    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:51.314809    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:51.314809    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:51.318031    6268 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:15:51.319237    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:51.319237    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:51.319237    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:51.319237    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:51.319317    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:51 GMT
	I0419 17:15:51.319317    6268 round_trippers.go:580]     Audit-Id: 69a0f294-3738-457c-9982-bba3cc2b01f9
	I0419 17:15:51.319354    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:51.319509    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"585","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6446 chars]
	I0419 17:15:51.321308    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:51.321308    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:51.321367    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:51.321367    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:51.325647    6268 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:15:51.325713    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:51.325713    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:51.325713    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:51.325713    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:51.325713    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:51.325713    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:51 GMT
	I0419 17:15:51.325713    6268 round_trippers.go:580]     Audit-Id: 07f6c7c6-c8bd-4e4c-a992-6ddbccdd7150
	I0419 17:15:51.325713    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:51.326826    6268 pod_ready.go:92] pod "coredns-7db6d8ff4d-b25zx" in "kube-system" namespace has status "Ready":"True"
	I0419 17:15:51.326826    6268 pod_ready.go:81] duration metric: took 12.215ms for pod "coredns-7db6d8ff4d-b25zx" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:51.326883    6268 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-614300" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:51.444364    6268 request.go:629] Waited for 117.3049ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/etcd-functional-614300
	I0419 17:15:51.444610    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/etcd-functional-614300
	I0419 17:15:51.444610    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:51.444610    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:51.444724    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:51.444955    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:51.448944    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:51.448944    6268 round_trippers.go:580]     Audit-Id: d7f1515d-63fb-4816-ab96-db1733c24f3d
	I0419 17:15:51.448944    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:51.448944    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:51.448944    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:51.448944    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:51.449121    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:51 GMT
	I0419 17:15:51.449413    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-614300","namespace":"kube-system","uid":"dac02a21-acdc-4e45-8b20-1f96f98862fb","resourceVersion":"591","creationTimestamp":"2024-04-20T00:13:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.34.3:2379","kubernetes.io/config.hash":"dd430d3fee522dd0b056f45fed60855c","kubernetes.io/config.mirror":"dd430d3fee522dd0b056f45fed60855c","kubernetes.io/config.seen":"2024-04-20T00:13:25.764687006Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6347 chars]
	I0419 17:15:51.649144    6268 request.go:629] Waited for 198.8829ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:51.649369    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:51.649369    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:51.649369    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:51.649369    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:51.655127    6268 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:15:51.655192    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:51.655192    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:51.655192    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:51.655192    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:51.655192    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:51 GMT
	I0419 17:15:51.655192    6268 round_trippers.go:580]     Audit-Id: ac2868e1-a83d-48b0-87ce-25ac2c24969b
	I0419 17:15:51.655192    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:51.655614    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:51.656135    6268 pod_ready.go:92] pod "etcd-functional-614300" in "kube-system" namespace has status "Ready":"True"
	I0419 17:15:51.656256    6268 pod_ready.go:81] duration metric: took 329.3093ms for pod "etcd-functional-614300" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:51.656256    6268 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-614300" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:51.859195    6268 request.go:629] Waited for 202.6237ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-614300
	I0419 17:15:51.859423    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-614300
	I0419 17:15:51.859423    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:51.859539    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:51.859539    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:51.859804    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:51.859804    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:51.859804    6268 round_trippers.go:580]     Audit-Id: 4327edc4-edf2-4154-b613-62bb8b5b1cf2
	I0419 17:15:51.863711    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:51.863711    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:51.863711    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:51.863711    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:51.863711    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:51 GMT
	I0419 17:15:51.863983    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-614300","namespace":"kube-system","uid":"6f4cb4ed-ce0c-4230-bf83-202649a788bf","resourceVersion":"582","creationTimestamp":"2024-04-20T00:13:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.34.3:8441","kubernetes.io/config.hash":"dda79ad7643f6f0a709844c0e7181f0e","kubernetes.io/config.mirror":"dda79ad7643f6f0a709844c0e7181f0e","kubernetes.io/config.seen":"2024-04-20T00:13:25.764692506Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8134 chars]
	I0419 17:15:52.053431    6268 request.go:629] Waited for 187.6885ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:52.053618    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:52.053618    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:52.053618    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:52.053618    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:52.054295    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:52.054295    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:52.058248    6268 round_trippers.go:580]     Audit-Id: ea1f366c-1dca-47ae-87a1-89f9f4249e7d
	I0419 17:15:52.058248    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:52.058424    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:52.058424    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:52.058424    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:52.058424    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:52 GMT
	I0419 17:15:52.058645    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:52.059176    6268 pod_ready.go:92] pod "kube-apiserver-functional-614300" in "kube-system" namespace has status "Ready":"True"
	I0419 17:15:52.059249    6268 pod_ready.go:81] duration metric: took 402.9922ms for pod "kube-apiserver-functional-614300" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:52.059271    6268 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-614300" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:52.254828    6268 request.go:629] Waited for 195.4349ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-614300
	I0419 17:15:52.255070    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-614300
	I0419 17:15:52.255347    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:52.255347    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:52.255347    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:52.255595    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:52.255595    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:52.255595    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:52.255595    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:52.255595    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:52.255595    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:52 GMT
	I0419 17:15:52.255595    6268 round_trippers.go:580]     Audit-Id: 2787df5d-05b8-489c-aece-6d69f67dbc58
	I0419 17:15:52.259354    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:52.259832    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-614300","namespace":"kube-system","uid":"622f8cac-6843-48b3-bb2e-0cdec34d13e1","resourceVersion":"587","creationTimestamp":"2024-04-20T00:13:25Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9bece462aca006eeb9b812aa025158d7","kubernetes.io/config.mirror":"9bece462aca006eeb9b812aa025158d7","kubernetes.io/config.seen":"2024-04-20T00:13:25.764693906Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7714 chars]
	I0419 17:15:52.443332    6268 request.go:629] Waited for 182.9101ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:52.443772    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:52.443873    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:52.443873    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:52.443873    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:52.448987    6268 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:15:52.449691    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:52.449691    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:52.449691    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:52.449691    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:52.449691    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:52 GMT
	I0419 17:15:52.449691    6268 round_trippers.go:580]     Audit-Id: 3ddc4388-db7f-47fd-8354-9b73d343055f
	I0419 17:15:52.449691    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:52.449834    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:52.450359    6268 pod_ready.go:92] pod "kube-controller-manager-functional-614300" in "kube-system" namespace has status "Ready":"True"
	I0419 17:15:52.450359    6268 pod_ready.go:81] duration metric: took 391.0221ms for pod "kube-controller-manager-functional-614300" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:52.450359    6268 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lrzcm" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:52.644250    6268 request.go:629] Waited for 193.613ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/kube-proxy-lrzcm
	I0419 17:15:52.644317    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/kube-proxy-lrzcm
	I0419 17:15:52.644317    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:52.644317    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:52.644317    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:52.648347    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:52.648347    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:52.648347    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:52.648347    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:52.648347    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:52.648347    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:52 GMT
	I0419 17:15:52.648347    6268 round_trippers.go:580]     Audit-Id: 2c9b7a14-d22a-495e-bff8-88d40ee84825
	I0419 17:15:52.648347    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:52.648808    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lrzcm","generateName":"kube-proxy-","namespace":"kube-system","uid":"9e920e7a-025c-40cd-8100-e279d31a6a36","resourceVersion":"580","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1a2d9b60-5b18-4ede-a5e7-690f791fa1d9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a2d9b60-5b18-4ede-a5e7-690f791fa1d9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6165 chars]
	I0419 17:15:52.854546    6268 request.go:629] Waited for 204.7404ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:52.854546    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:52.854885    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:52.854885    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:52.854885    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:52.867570    6268 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0419 17:15:52.867570    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:52.867635    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:52.867635    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:52.867635    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:52.867635    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:52 GMT
	I0419 17:15:52.867635    6268 round_trippers.go:580]     Audit-Id: 1803c5f7-b739-426f-9bd7-87c8209e95f7
	I0419 17:15:52.867635    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:52.867934    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:52.868603    6268 pod_ready.go:92] pod "kube-proxy-lrzcm" in "kube-system" namespace has status "Ready":"True"
	I0419 17:15:52.868656    6268 pod_ready.go:81] duration metric: took 418.2961ms for pod "kube-proxy-lrzcm" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:52.868656    6268 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-614300" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:53.046719    6268 request.go:629] Waited for 177.7356ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-614300
	I0419 17:15:53.046831    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-614300
	I0419 17:15:53.046831    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:53.046831    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:53.046831    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:53.047989    6268 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:15:53.054642    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:53.054642    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:53 GMT
	I0419 17:15:53.054642    6268 round_trippers.go:580]     Audit-Id: 46d63fae-066f-4031-bda8-33eb372d9962
	I0419 17:15:53.054642    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:53.054642    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:53.054642    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:53.054642    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:53.055104    6268 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-614300","namespace":"kube-system","uid":"e53ec63d-4823-4689-9039-8eee1c8f8549","resourceVersion":"595","creationTimestamp":"2024-04-20T00:13:25Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5486b6c55771cb4242a37f64de866e73","kubernetes.io/config.mirror":"5486b6c55771cb4242a37f64de866e73","kubernetes.io/config.seen":"2024-04-20T00:13:17.570815911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5192 chars]
	I0419 17:15:53.104977    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:15:53.104977    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:15:53.118988    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:15:53.118906    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:15:53.124431    6268 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 17:15:53.120100    6268 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 17:15:53.127458    6268 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 17:15:53.127458    6268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0419 17:15:53.127774    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:15:53.127936    6268 kapi.go:59] client config for functional-614300: &rest.Config{Host:"https://172.19.34.3:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-614300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-614300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c35620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 17:15:53.127936    6268 addons.go:234] Setting addon default-storageclass=true in "functional-614300"
	W0419 17:15:53.127936    6268 addons.go:243] addon default-storageclass should already be in state true
	I0419 17:15:53.127936    6268 host.go:66] Checking if "functional-614300" exists ...
	I0419 17:15:53.129712    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:15:53.252342    6268 request.go:629] Waited for 196.7302ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:53.252342    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes/functional-614300
	I0419 17:15:53.252342    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:53.252342    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:53.252342    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:53.257572    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:53.257572    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:53.257663    6268 round_trippers.go:580]     Audit-Id: c63de90c-14ab-4849-9789-6f79da85c353
	I0419 17:15:53.257663    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:53.257737    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:53.257737    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:53.257775    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:53.257775    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:53 GMT
	I0419 17:15:53.258044    6268 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-20T00:13:22Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0419 17:15:53.258782    6268 pod_ready.go:92] pod "kube-scheduler-functional-614300" in "kube-system" namespace has status "Ready":"True"
	I0419 17:15:53.258782    6268 pod_ready.go:81] duration metric: took 390.1254ms for pod "kube-scheduler-functional-614300" in "kube-system" namespace to be "Ready" ...
	I0419 17:15:53.258782    6268 pod_ready.go:38] duration metric: took 1.9553271s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 17:15:53.258782    6268 api_server.go:52] waiting for apiserver process to appear ...
	I0419 17:15:53.275900    6268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 17:15:53.311796    6268 command_runner.go:130] > 5772
	I0419 17:15:53.311933    6268 api_server.go:72] duration metric: took 2.3810075s to wait for apiserver process to appear ...
	I0419 17:15:53.312057    6268 api_server.go:88] waiting for apiserver healthz status ...
	I0419 17:15:53.312142    6268 api_server.go:253] Checking apiserver healthz at https://172.19.34.3:8441/healthz ...
	I0419 17:15:53.319550    6268 api_server.go:279] https://172.19.34.3:8441/healthz returned 200:
	ok
	I0419 17:15:53.320475    6268 round_trippers.go:463] GET https://172.19.34.3:8441/version
	I0419 17:15:53.320523    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:53.320523    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:53.320578    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:53.321462    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:53.321462    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:53.321462    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:53 GMT
	I0419 17:15:53.322541    6268 round_trippers.go:580]     Audit-Id: ef17d41e-9ba5-4126-97bc-851f6b11e38b
	I0419 17:15:53.322541    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:53.322541    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:53.322541    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:53.322541    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:53.322633    6268 round_trippers.go:580]     Content-Length: 263
	I0419 17:15:53.322689    6268 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0419 17:15:53.322772    6268 api_server.go:141] control plane version: v1.30.0
	I0419 17:15:53.322772    6268 api_server.go:131] duration metric: took 10.7146ms to wait for apiserver health ...
	I0419 17:15:53.322772    6268 system_pods.go:43] waiting for kube-system pods to appear ...
	I0419 17:15:53.449252    6268 request.go:629] Waited for 126.2706ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods
	I0419 17:15:53.449252    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods
	I0419 17:15:53.449252    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:53.449252    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:53.449435    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:53.449850    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:53.454723    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:53.454723    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:53.454723    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:53.454723    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:53.454723    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:53 GMT
	I0419 17:15:53.454723    6268 round_trippers.go:580]     Audit-Id: f59a1490-45a5-4103-a953-13160df3ccb6
	I0419 17:15:53.454723    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:53.456504    6268 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"595"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"585","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50650 chars]
	I0419 17:15:53.460742    6268 system_pods.go:59] 7 kube-system pods found
	I0419 17:15:53.460742    6268 system_pods.go:61] "coredns-7db6d8ff4d-b25zx" [fd0e7b75-307a-47c4-9f4f-a24534fc157e] Running
	I0419 17:15:53.460849    6268 system_pods.go:61] "etcd-functional-614300" [dac02a21-acdc-4e45-8b20-1f96f98862fb] Running
	I0419 17:15:53.460849    6268 system_pods.go:61] "kube-apiserver-functional-614300" [6f4cb4ed-ce0c-4230-bf83-202649a788bf] Running
	I0419 17:15:53.460849    6268 system_pods.go:61] "kube-controller-manager-functional-614300" [622f8cac-6843-48b3-bb2e-0cdec34d13e1] Running
	I0419 17:15:53.460849    6268 system_pods.go:61] "kube-proxy-lrzcm" [9e920e7a-025c-40cd-8100-e279d31a6a36] Running
	I0419 17:15:53.460849    6268 system_pods.go:61] "kube-scheduler-functional-614300" [e53ec63d-4823-4689-9039-8eee1c8f8549] Running
	I0419 17:15:53.460849    6268 system_pods.go:61] "storage-provisioner" [04f6a541-81e8-4d8a-bab2-51e0112a9d5c] Running
	I0419 17:15:53.460849    6268 system_pods.go:74] duration metric: took 138.077ms to wait for pod list to return data ...
	I0419 17:15:53.460849    6268 default_sa.go:34] waiting for default service account to be created ...
	I0419 17:15:53.644717    6268 request.go:629] Waited for 183.6938ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.34.3:8441/api/v1/namespaces/default/serviceaccounts
	I0419 17:15:53.644717    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/default/serviceaccounts
	I0419 17:15:53.644717    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:53.644717    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:53.644717    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:53.645474    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:53.650377    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:53.650377    6268 round_trippers.go:580]     Audit-Id: 121eeb3c-23dc-452a-aea7-77c9aaaef78b
	I0419 17:15:53.650377    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:53.650377    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:53.650377    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:53.650377    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:53.650377    6268 round_trippers.go:580]     Content-Length: 261
	I0419 17:15:53.650377    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:53 GMT
	I0419 17:15:53.650519    6268 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"595"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"d6473af0-6d1b-47f2-8f33-9a53d26da38f","resourceVersion":"344","creationTimestamp":"2024-04-20T00:13:39Z"}}]}
	I0419 17:15:53.651065    6268 default_sa.go:45] found service account: "default"
	I0419 17:15:53.651147    6268 default_sa.go:55] duration metric: took 190.2976ms for default service account to be created ...
	I0419 17:15:53.651147    6268 system_pods.go:116] waiting for k8s-apps to be running ...
	I0419 17:15:53.849242    6268 request.go:629] Waited for 197.8138ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods
	I0419 17:15:53.849494    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/namespaces/kube-system/pods
	I0419 17:15:53.849602    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:53.849602    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:53.849602    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:53.850026    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:53.850026    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:53.850026    6268 round_trippers.go:580]     Audit-Id: 36eee07e-a147-431f-a18a-39988a55de18
	I0419 17:15:53.854860    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:53.854860    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:53.854860    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:53.854860    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:53.854860    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:53 GMT
	I0419 17:15:53.855641    6268 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"595"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-b25zx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fd0e7b75-307a-47c4-9f4f-a24534fc157e","resourceVersion":"585","creationTimestamp":"2024-04-20T00:13:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T00:13:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6eb8ad8c-adab-4ad5-80ce-6d8bde18ffb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50650 chars]
	I0419 17:15:53.858336    6268 system_pods.go:86] 7 kube-system pods found
	I0419 17:15:53.858336    6268 system_pods.go:89] "coredns-7db6d8ff4d-b25zx" [fd0e7b75-307a-47c4-9f4f-a24534fc157e] Running
	I0419 17:15:53.858336    6268 system_pods.go:89] "etcd-functional-614300" [dac02a21-acdc-4e45-8b20-1f96f98862fb] Running
	I0419 17:15:53.858336    6268 system_pods.go:89] "kube-apiserver-functional-614300" [6f4cb4ed-ce0c-4230-bf83-202649a788bf] Running
	I0419 17:15:53.858336    6268 system_pods.go:89] "kube-controller-manager-functional-614300" [622f8cac-6843-48b3-bb2e-0cdec34d13e1] Running
	I0419 17:15:53.858336    6268 system_pods.go:89] "kube-proxy-lrzcm" [9e920e7a-025c-40cd-8100-e279d31a6a36] Running
	I0419 17:15:53.858336    6268 system_pods.go:89] "kube-scheduler-functional-614300" [e53ec63d-4823-4689-9039-8eee1c8f8549] Running
	I0419 17:15:53.858336    6268 system_pods.go:89] "storage-provisioner" [04f6a541-81e8-4d8a-bab2-51e0112a9d5c] Running
	I0419 17:15:53.858336    6268 system_pods.go:126] duration metric: took 207.1881ms to wait for k8s-apps to be running ...
	I0419 17:15:53.858336    6268 system_svc.go:44] waiting for kubelet service to be running ....
	I0419 17:15:53.867296    6268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 17:15:53.900705    6268 system_svc.go:56] duration metric: took 42.3691ms WaitForService to wait for kubelet
	I0419 17:15:53.900830    6268 kubeadm.go:576] duration metric: took 2.9697783s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 17:15:53.900830    6268 node_conditions.go:102] verifying NodePressure condition ...
	I0419 17:15:54.053595    6268 request.go:629] Waited for 152.5073ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.34.3:8441/api/v1/nodes
	I0419 17:15:54.053684    6268 round_trippers.go:463] GET https://172.19.34.3:8441/api/v1/nodes
	I0419 17:15:54.053684    6268 round_trippers.go:469] Request Headers:
	I0419 17:15:54.053873    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:15:54.053903    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:15:54.054695    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:15:54.058542    6268 round_trippers.go:577] Response Headers:
	I0419 17:15:54.058542    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:15:54.058542    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:15:54.058542    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:15:54.058542    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:15:54.058542    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:15:54 GMT
	I0419 17:15:54.058631    6268 round_trippers.go:580]     Audit-Id: 84d52bbc-20d3-42aa-a23b-03c2b37050bf
	I0419 17:15:54.058832    6268 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"595"},"items":[{"metadata":{"name":"functional-614300","uid":"ed394da3-0edb-4926-85fa-c49360a6d792","resourceVersion":"516","creationTimestamp":"2024-04-20T00:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-614300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"functional-614300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T17_13_26_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4838 chars]
	I0419 17:15:54.058897    6268 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 17:15:54.058897    6268 node_conditions.go:123] node cpu capacity is 2
	I0419 17:15:54.058897    6268 node_conditions.go:105] duration metric: took 158.0666ms to run NodePressure ...
	I0419 17:15:54.058897    6268 start.go:240] waiting for startup goroutines ...
	I0419 17:15:55.259876    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:15:55.259876    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:15:55.272285    6268 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0419 17:15:55.272421    6268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0419 17:15:55.272421    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
	I0419 17:15:55.284935    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:15:55.284935    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:15:55.284935    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
	I0419 17:15:57.398802    6268 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:15:57.410690    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:15:57.410690    6268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
	I0419 17:15:57.812704    6268 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
	
	I0419 17:15:57.812704    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:15:57.826374    6268 sshutil.go:53] new ssh client: &{IP:172.19.34.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-614300\id_rsa Username:docker}
	I0419 17:15:57.974229    6268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 17:15:58.789742    6268 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0419 17:15:58.792069    6268 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0419 17:15:58.792115    6268 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0419 17:15:58.792115    6268 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0419 17:15:58.792198    6268 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0419 17:15:58.792267    6268 command_runner.go:130] > pod/storage-provisioner configured
	I0419 17:15:59.903085    6268 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
	
	I0419 17:15:59.903085    6268 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:15:59.916457    6268 sshutil.go:53] new ssh client: &{IP:172.19.34.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-614300\id_rsa Username:docker}
	I0419 17:16:00.063293    6268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0419 17:16:00.253129    6268 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0419 17:16:00.253559    6268 round_trippers.go:463] GET https://172.19.34.3:8441/apis/storage.k8s.io/v1/storageclasses
	I0419 17:16:00.253559    6268 round_trippers.go:469] Request Headers:
	I0419 17:16:00.253559    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:16:00.253559    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:16:00.254142    6268 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:16:00.254142    6268 round_trippers.go:577] Response Headers:
	I0419 17:16:00.254142    6268 round_trippers.go:580]     Audit-Id: f64408e6-09bb-4550-aa49-de30a7573139
	I0419 17:16:00.254142    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:16:00.254142    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:16:00.254142    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:16:00.254142    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:16:00.254142    6268 round_trippers.go:580]     Content-Length: 1273
	I0419 17:16:00.254142    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:16:00 GMT
	I0419 17:16:00.254142    6268 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"602"},"items":[{"metadata":{"name":"standard","uid":"b370b471-131d-4c61-9a93-4a6395a3530e","resourceVersion":"440","creationTimestamp":"2024-04-20T00:13:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-20T00:13:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0419 17:16:00.258423    6268 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b370b471-131d-4c61-9a93-4a6395a3530e","resourceVersion":"440","creationTimestamp":"2024-04-20T00:13:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-20T00:13:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0419 17:16:00.258423    6268 round_trippers.go:463] PUT https://172.19.34.3:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0419 17:16:00.258423    6268 round_trippers.go:469] Request Headers:
	I0419 17:16:00.258546    6268 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:16:00.258546    6268 round_trippers.go:473]     Content-Type: application/json
	I0419 17:16:00.258546    6268 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:16:00.260031    6268 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:16:00.260031    6268 round_trippers.go:577] Response Headers:
	I0419 17:16:00.260031    6268 round_trippers.go:580]     Audit-Id: 10a182cf-c60b-42eb-88fc-522a18bf9361
	I0419 17:16:00.260031    6268 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 17:16:00.260031    6268 round_trippers.go:580]     Content-Type: application/json
	I0419 17:16:00.260031    6268 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac05c8c8-c9b3-4ebb-af83-96eac9a7ce5c
	I0419 17:16:00.260031    6268 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8b4cc801-bf60-4b22-a725-f6a0b3aaf223
	I0419 17:16:00.263408    6268 round_trippers.go:580]     Content-Length: 1220
	I0419 17:16:00.263408    6268 round_trippers.go:580]     Date: Sat, 20 Apr 2024 00:16:00 GMT
	I0419 17:16:00.263488    6268 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b370b471-131d-4c61-9a93-4a6395a3530e","resourceVersion":"440","creationTimestamp":"2024-04-20T00:13:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-20T00:13:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0419 17:16:00.268317    6268 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0419 17:16:00.270448    6268 addons.go:505] duration metric: took 9.3395057s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0419 17:16:00.270448    6268 start.go:245] waiting for cluster config update ...
	I0419 17:16:00.270448    6268 start.go:254] writing updated cluster config ...
	I0419 17:16:00.286730    6268 ssh_runner.go:195] Run: rm -f paused
	I0419 17:16:00.424822    6268 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0419 17:16:00.433371    6268 out.go:177] * Done! kubectl is now configured to use "functional-614300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 20 00:15:36 functional-614300 dockerd[4134]: time="2024-04-20T00:15:36.809530687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:15:36 functional-614300 dockerd[4134]: time="2024-04-20T00:15:36.809617588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:15:36 functional-614300 dockerd[4134]: time="2024-04-20T00:15:36.849949230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 20 00:15:36 functional-614300 dockerd[4134]: time="2024-04-20T00:15:36.850847045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 00:15:36 functional-614300 dockerd[4134]: time="2024-04-20T00:15:36.851264451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:15:36 functional-614300 dockerd[4134]: time="2024-04-20T00:15:36.852264667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:15:36 functional-614300 dockerd[4134]: time="2024-04-20T00:15:36.909754883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 20 00:15:36 functional-614300 dockerd[4134]: time="2024-04-20T00:15:36.909864084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 00:15:36 functional-614300 dockerd[4134]: time="2024-04-20T00:15:36.909876985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:15:36 functional-614300 dockerd[4134]: time="2024-04-20T00:15:36.909967986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:15:37 functional-614300 cri-dockerd[4380]: time="2024-04-20T00:15:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3162d8c75061c72e5dc13bb7dbb8f9d5ee06899c0a86a73be4f47509c8eac693/resolv.conf as [nameserver 172.19.32.1]"
	Apr 20 00:15:37 functional-614300 cri-dockerd[4380]: time="2024-04-20T00:15:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a07aa207c4eb70290fbceff424a539a43c078967e95bd47acffec21d7b0f9b28/resolv.conf as [nameserver 172.19.32.1]"
	Apr 20 00:15:37 functional-614300 cri-dockerd[4380]: time="2024-04-20T00:15:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/463c1e224e946e020c2fc82105f803a8da81c3b618f1a14b5d82b0c7f59d8fe2/resolv.conf as [nameserver 172.19.32.1]"
	Apr 20 00:15:37 functional-614300 dockerd[4134]: time="2024-04-20T00:15:37.339254290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 20 00:15:37 functional-614300 dockerd[4134]: time="2024-04-20T00:15:37.339654996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 00:15:37 functional-614300 dockerd[4134]: time="2024-04-20T00:15:37.339753597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:15:37 functional-614300 dockerd[4134]: time="2024-04-20T00:15:37.347477906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:15:37 functional-614300 dockerd[4134]: time="2024-04-20T00:15:37.424741194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 20 00:15:37 functional-614300 dockerd[4134]: time="2024-04-20T00:15:37.424991697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 00:15:37 functional-614300 dockerd[4134]: time="2024-04-20T00:15:37.425084499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:15:37 functional-614300 dockerd[4134]: time="2024-04-20T00:15:37.425410903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:15:37 functional-614300 dockerd[4134]: time="2024-04-20T00:15:37.769301245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 20 00:15:37 functional-614300 dockerd[4134]: time="2024-04-20T00:15:37.769380946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 00:15:37 functional-614300 dockerd[4134]: time="2024-04-20T00:15:37.769394646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:15:37 functional-614300 dockerd[4134]: time="2024-04-20T00:15:37.769494447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c8a68e0dd0c7c       cbb01a7bd410d       2 minutes ago       Running             coredns                   1                   463c1e224e946       coredns-7db6d8ff4d-b25zx
	a2618b0837f0c       a0bf559e280cf       2 minutes ago       Running             kube-proxy                2                   a07aa207c4eb7       kube-proxy-lrzcm
	3d5cfc2d84df2       6e38f40d628db       2 minutes ago       Running             storage-provisioner       2                   3162d8c75061c       storage-provisioner
	b6c209c5c9941       259c8277fcbbc       2 minutes ago       Running             kube-scheduler            2                   b8c30878a3d1f       kube-scheduler-functional-614300
	752e24cb6ba7c       c42f13656d0b2       2 minutes ago       Running             kube-apiserver            2                   9a6f1510551c8       kube-apiserver-functional-614300
	4f1883cafa8f5       3861cfcd7c04c       2 minutes ago       Running             etcd                      2                   1cf01c7fc826a       etcd-functional-614300
	397eb57684ca8       c7aad43836fa5       2 minutes ago       Running             kube-controller-manager   2                   666ea745be2cc       kube-controller-manager-functional-614300
	2a1a32f7f2fe8       a0bf559e280cf       2 minutes ago       Created             kube-proxy                1                   a9371c28aa09b       kube-proxy-lrzcm
	0f79ade407a39       c42f13656d0b2       2 minutes ago       Created             kube-apiserver            1                   fe1cb30f2314a       kube-apiserver-functional-614300
	27cc426709523       c7aad43836fa5       2 minutes ago       Created             kube-controller-manager   1                   4bca19607c830       kube-controller-manager-functional-614300
	95cdd3967491e       259c8277fcbbc       2 minutes ago       Exited              kube-scheduler            1                   8b0dca2a4dce0       kube-scheduler-functional-614300
	360bf6a69c980       3861cfcd7c04c       2 minutes ago       Exited              etcd                      1                   ec335cd22217d       etcd-functional-614300
	9eee3dd7d42a7       6e38f40d628db       2 minutes ago       Exited              storage-provisioner       1                   4d16658804644       storage-provisioner
	94c6ef3b66676       cbb01a7bd410d       4 minutes ago       Exited              coredns                   0                   cc9a8bfb1f0ce       coredns-7db6d8ff4d-b25zx
	
	
	==> coredns [94c6ef3b6667] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 93714cfd58e203ac2baa48ea9c7b435951d2a9faed7a5c70b4e84c89c6c1fe4c1dfa41f14b3ebf0f5941dade673a82eaad960061e673dd78dcb856db3393b39d
	[INFO] Reloading complete
	[INFO] 127.0.0.1:40595 - 22791 "HINFO IN 4233646558473102438.6571756292180087344. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046834208s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c8a68e0dd0c7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 93714cfd58e203ac2baa48ea9c7b435951d2a9faed7a5c70b4e84c89c6c1fe4c1dfa41f14b3ebf0f5941dade673a82eaad960061e673dd78dcb856db3393b39d
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38885 - 16314 "HINFO IN 8348476683866883143.7397406876974763958. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.044077239s
	
	
	==> describe nodes <==
	Name:               functional-614300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-614300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=functional-614300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_19T17_13_26_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:13:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-614300
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:17:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:17:38 +0000   Sat, 20 Apr 2024 00:13:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:17:38 +0000   Sat, 20 Apr 2024 00:13:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:17:38 +0000   Sat, 20 Apr 2024 00:13:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:17:38 +0000   Sat, 20 Apr 2024 00:13:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.34.3
	  Hostname:    functional-614300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912864Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912864Ki
	  pods:               110
	System Info:
	  Machine ID:                 71bf18394b1b4789a459a9307a295e81
	  System UUID:                ed440e53-f288-fc4f-80cb-75acffcf5fed
	  Boot ID:                    74a9b052-ee74-46cd-b572-4e62ee5346d3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-b25zx                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m2s
	  kube-system                 etcd-functional-614300                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-functional-614300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-functional-614300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-lrzcm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-functional-614300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m                     kube-proxy       
	  Normal  Starting                 2m4s                   kube-proxy       
	  Normal  Starting                 4m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m25s (x8 over 4m25s)  kubelet          Node functional-614300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s (x8 over 4m25s)  kubelet          Node functional-614300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m25s (x7 over 4m25s)  kubelet          Node functional-614300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node functional-614300 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node functional-614300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node functional-614300 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeReady                4m16s                  kubelet          Node functional-614300 status is now: NodeReady
	  Normal  RegisteredNode           4m3s                   node-controller  Node functional-614300 event: Registered Node functional-614300 in Controller
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node functional-614300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node functional-614300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x7 over 2m12s)  kubelet          Node functional-614300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           114s                   node-controller  Node functional-614300 event: Registered Node functional-614300 in Controller
	
	
	==> dmesg <==
	[  +0.709909] systemd-fstab-generator[1531]: Ignoring "noauto" option for root device
	[  +6.297723] systemd-fstab-generator[1727]: Ignoring "noauto" option for root device
	[  +0.112942] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.533537] systemd-fstab-generator[2127]: Ignoring "noauto" option for root device
	[  +0.129649] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.870719] systemd-fstab-generator[2363]: Ignoring "noauto" option for root device
	[  +0.177258] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.685106] kauditd_printk_skb: 90 callbacks suppressed
	[Apr20 00:15] systemd-fstab-generator[3672]: Ignoring "noauto" option for root device
	[  +0.163125] kauditd_printk_skb: 10 callbacks suppressed
	[  +0.544917] systemd-fstab-generator[3708]: Ignoring "noauto" option for root device
	[  +0.267280] systemd-fstab-generator[3720]: Ignoring "noauto" option for root device
	[  +0.296443] systemd-fstab-generator[3734]: Ignoring "noauto" option for root device
	[  +5.305544] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.120414] systemd-fstab-generator[4329]: Ignoring "noauto" option for root device
	[  +0.224634] systemd-fstab-generator[4341]: Ignoring "noauto" option for root device
	[  +0.228287] systemd-fstab-generator[4353]: Ignoring "noauto" option for root device
	[  +0.335547] systemd-fstab-generator[4368]: Ignoring "noauto" option for root device
	[  +0.958558] systemd-fstab-generator[4522]: Ignoring "noauto" option for root device
	[  +0.665138] kauditd_printk_skb: 142 callbacks suppressed
	[  +3.666533] systemd-fstab-generator[5314]: Ignoring "noauto" option for root device
	[  +1.382771] kauditd_printk_skb: 76 callbacks suppressed
	[  +5.726728] kauditd_printk_skb: 20 callbacks suppressed
	[ +11.732838] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.461109] systemd-fstab-generator[6328]: Ignoring "noauto" option for root device
	
	
	==> etcd [360bf6a69c98] <==
	{"level":"info","ts":"2024-04-20T00:15:27.633427Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"16.416442ms"}
	{"level":"info","ts":"2024-04-20T00:15:27.650102Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-20T00:15:27.66896Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"2456fb0657732b1a","local-member-id":"86f378078272eab2","commit-index":540}
	{"level":"info","ts":"2024-04-20T00:15:27.669042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86f378078272eab2 switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-20T00:15:27.669065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86f378078272eab2 became follower at term 2"}
	{"level":"info","ts":"2024-04-20T00:15:27.6691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 86f378078272eab2 [peers: [], term: 2, commit: 540, applied: 0, lastindex: 540, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-20T00:15:27.679786Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-20T00:15:27.705491Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":509}
	{"level":"info","ts":"2024-04-20T00:15:27.717228Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-20T00:15:27.72595Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"86f378078272eab2","timeout":"7s"}
	{"level":"info","ts":"2024-04-20T00:15:27.726374Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"86f378078272eab2"}
	{"level":"info","ts":"2024-04-20T00:15:27.726499Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"86f378078272eab2","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-20T00:15:27.729027Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-20T00:15:27.729206Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T00:15:27.729298Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T00:15:27.729311Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T00:15:27.729542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86f378078272eab2 switched to configuration voters=(9724247994071706290)"}
	{"level":"info","ts":"2024-04-20T00:15:27.729618Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2456fb0657732b1a","local-member-id":"86f378078272eab2","added-peer-id":"86f378078272eab2","added-peer-peer-urls":["https://172.19.34.3:2380"]}
	{"level":"info","ts":"2024-04-20T00:15:27.729714Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2456fb0657732b1a","local-member-id":"86f378078272eab2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T00:15:27.729767Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T00:15:27.746475Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.34.3:2380"}
	{"level":"info","ts":"2024-04-20T00:15:27.74654Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.34.3:2380"}
	{"level":"info","ts":"2024-04-20T00:15:27.746953Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-20T00:15:27.753759Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-20T00:15:27.753708Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"86f378078272eab2","initial-advertise-peer-urls":["https://172.19.34.3:2380"],"listen-peer-urls":["https://172.19.34.3:2380"],"advertise-client-urls":["https://172.19.34.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.34.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	
	
	==> etcd [4f1883cafa8f] <==
	{"level":"info","ts":"2024-04-20T00:15:32.023008Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T00:15:32.024016Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T00:15:32.026155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86f378078272eab2 switched to configuration voters=(9724247994071706290)"}
	{"level":"info","ts":"2024-04-20T00:15:32.02648Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2456fb0657732b1a","local-member-id":"86f378078272eab2","added-peer-id":"86f378078272eab2","added-peer-peer-urls":["https://172.19.34.3:2380"]}
	{"level":"info","ts":"2024-04-20T00:15:32.029248Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2456fb0657732b1a","local-member-id":"86f378078272eab2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T00:15:32.029449Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T00:15:32.051856Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-20T00:15:32.052946Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.34.3:2380"}
	{"level":"info","ts":"2024-04-20T00:15:32.069272Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.34.3:2380"}
	{"level":"info","ts":"2024-04-20T00:15:32.071959Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"86f378078272eab2","initial-advertise-peer-urls":["https://172.19.34.3:2380"],"listen-peer-urls":["https://172.19.34.3:2380"],"advertise-client-urls":["https://172.19.34.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.34.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-20T00:15:32.07583Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-20T00:15:33.764879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86f378078272eab2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-20T00:15:33.765139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86f378078272eab2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-20T00:15:33.765164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86f378078272eab2 received MsgPreVoteResp from 86f378078272eab2 at term 2"}
	{"level":"info","ts":"2024-04-20T00:15:33.765179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86f378078272eab2 became candidate at term 3"}
	{"level":"info","ts":"2024-04-20T00:15:33.765191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86f378078272eab2 received MsgVoteResp from 86f378078272eab2 at term 3"}
	{"level":"info","ts":"2024-04-20T00:15:33.765203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86f378078272eab2 became leader at term 3"}
	{"level":"info","ts":"2024-04-20T00:15:33.765212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 86f378078272eab2 elected leader 86f378078272eab2 at term 3"}
	{"level":"info","ts":"2024-04-20T00:15:33.773742Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"86f378078272eab2","local-member-attributes":"{Name:functional-614300 ClientURLs:[https://172.19.34.3:2379]}","request-path":"/0/members/86f378078272eab2/attributes","cluster-id":"2456fb0657732b1a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-20T00:15:33.773775Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T00:15:33.774053Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-20T00:15:33.774075Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-20T00:15:33.774098Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T00:15:33.776492Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.34.3:2379"}
	{"level":"info","ts":"2024-04-20T00:15:33.776787Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 00:17:42 up 6 min,  0 users,  load average: 0.23, 0.32, 0.15
	Linux functional-614300 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0f79ade407a3] <==
	
	
	==> kube-apiserver [752e24cb6ba7] <==
	I0420 00:15:35.446056       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0420 00:15:35.446151       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0420 00:15:35.446162       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0420 00:15:35.447067       1 shared_informer.go:320] Caches are synced for configmaps
	I0420 00:15:35.452599       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0420 00:15:35.454623       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0420 00:15:35.454961       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0420 00:15:35.455768       1 aggregator.go:165] initial CRD sync complete...
	I0420 00:15:35.456025       1 autoregister_controller.go:141] Starting autoregister controller
	I0420 00:15:35.456171       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0420 00:15:35.456362       1 cache.go:39] Caches are synced for autoregister controller
	I0420 00:15:35.461825       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0420 00:15:35.464953       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0420 00:15:35.465734       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0420 00:15:35.465868       1 policy_source.go:224] refreshing policies
	I0420 00:15:35.466234       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0420 00:15:35.473475       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0420 00:15:36.250628       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0420 00:15:37.092526       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0420 00:15:37.115257       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0420 00:15:37.189341       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0420 00:15:37.274018       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0420 00:15:37.296090       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0420 00:15:48.583185       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0420 00:15:48.643534       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [27cc42670952] <==
	
	
	==> kube-controller-manager [397eb57684ca] <==
	I0420 00:15:48.631933       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0420 00:15:48.635092       1 shared_informer.go:320] Caches are synced for daemon sets
	I0420 00:15:48.638583       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0420 00:15:48.650545       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0420 00:15:48.652295       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0420 00:15:48.658745       1 shared_informer.go:320] Caches are synced for job
	I0420 00:15:48.678441       1 shared_informer.go:320] Caches are synced for cronjob
	I0420 00:15:48.703884       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0420 00:15:48.712416       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0420 00:15:48.747844       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0420 00:15:48.750504       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0420 00:15:48.754150       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0420 00:15:48.757788       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0420 00:15:48.779317       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0420 00:15:48.796997       1 shared_informer.go:320] Caches are synced for expand
	I0420 00:15:48.817988       1 shared_informer.go:320] Caches are synced for resource quota
	I0420 00:15:48.818268       1 shared_informer.go:320] Caches are synced for PVC protection
	I0420 00:15:48.821949       1 shared_informer.go:320] Caches are synced for stateful set
	I0420 00:15:48.828408       1 shared_informer.go:320] Caches are synced for resource quota
	I0420 00:15:48.838849       1 shared_informer.go:320] Caches are synced for persistent volume
	I0420 00:15:48.865437       1 shared_informer.go:320] Caches are synced for attach detach
	I0420 00:15:48.879175       1 shared_informer.go:320] Caches are synced for ephemeral
	I0420 00:15:49.261642       1 shared_informer.go:320] Caches are synced for garbage collector
	I0420 00:15:49.313487       1 shared_informer.go:320] Caches are synced for garbage collector
	I0420 00:15:49.313608       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [2a1a32f7f2fe] <==
	
	
	==> kube-proxy [a2618b0837f0] <==
	I0420 00:15:37.705941       1 server_linux.go:69] "Using iptables proxy"
	I0420 00:15:37.729593       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.34.3"]
	I0420 00:15:37.822959       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 00:15:37.822994       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 00:15:37.823012       1 server_linux.go:165] "Using iptables Proxier"
	I0420 00:15:37.827714       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 00:15:37.828035       1 server.go:872] "Version info" version="v1.30.0"
	I0420 00:15:37.828371       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:15:37.829966       1 config.go:192] "Starting service config controller"
	I0420 00:15:37.830512       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 00:15:37.830599       1 config.go:101] "Starting endpoint slice config controller"
	I0420 00:15:37.830675       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 00:15:37.831749       1 config.go:319] "Starting node config controller"
	I0420 00:15:37.831981       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 00:15:37.930881       1 shared_informer.go:320] Caches are synced for service config
	I0420 00:15:37.930875       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 00:15:37.932116       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [95cdd3967491] <==
	
	
	==> kube-scheduler [b6c209c5c994] <==
	I0420 00:15:32.623397       1 serving.go:380] Generated self-signed cert in-memory
	W0420 00:15:35.332548       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0420 00:15:35.332632       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 00:15:35.332644       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0420 00:15:35.332651       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0420 00:15:35.396451       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0420 00:15:35.399050       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:15:35.403022       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0420 00:15:35.403074       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0420 00:15:35.403321       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0420 00:15:35.403535       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0420 00:15:35.504403       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 20 00:15:35 functional-614300 kubelet[5336]: I0420 00:15:35.541204    5336 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 20 00:15:35 functional-614300 kubelet[5336]: E0420 00:15:35.635120    5336 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-functional-614300\" already exists" pod="kube-system/kube-apiserver-functional-614300"
	Apr 20 00:15:36 functional-614300 kubelet[5336]: I0420 00:15:36.015747    5336 apiserver.go:52] "Watching apiserver"
	Apr 20 00:15:36 functional-614300 kubelet[5336]: I0420 00:15:36.020532    5336 topology_manager.go:215] "Topology Admit Handler" podUID="9e920e7a-025c-40cd-8100-e279d31a6a36" podNamespace="kube-system" podName="kube-proxy-lrzcm"
	Apr 20 00:15:36 functional-614300 kubelet[5336]: I0420 00:15:36.022642    5336 topology_manager.go:215] "Topology Admit Handler" podUID="fd0e7b75-307a-47c4-9f4f-a24534fc157e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-b25zx"
	Apr 20 00:15:36 functional-614300 kubelet[5336]: I0420 00:15:36.022880    5336 topology_manager.go:215] "Topology Admit Handler" podUID="04f6a541-81e8-4d8a-bab2-51e0112a9d5c" podNamespace="kube-system" podName="storage-provisioner"
	Apr 20 00:15:36 functional-614300 kubelet[5336]: I0420 00:15:36.045047    5336 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 20 00:15:36 functional-614300 kubelet[5336]: I0420 00:15:36.063481    5336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e920e7a-025c-40cd-8100-e279d31a6a36-lib-modules\") pod \"kube-proxy-lrzcm\" (UID: \"9e920e7a-025c-40cd-8100-e279d31a6a36\") " pod="kube-system/kube-proxy-lrzcm"
	Apr 20 00:15:36 functional-614300 kubelet[5336]: I0420 00:15:36.063543    5336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e920e7a-025c-40cd-8100-e279d31a6a36-xtables-lock\") pod \"kube-proxy-lrzcm\" (UID: \"9e920e7a-025c-40cd-8100-e279d31a6a36\") " pod="kube-system/kube-proxy-lrzcm"
	Apr 20 00:15:36 functional-614300 kubelet[5336]: I0420 00:15:36.063595    5336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/04f6a541-81e8-4d8a-bab2-51e0112a9d5c-tmp\") pod \"storage-provisioner\" (UID: \"04f6a541-81e8-4d8a-bab2-51e0112a9d5c\") " pod="kube-system/storage-provisioner"
	Apr 20 00:15:37 functional-614300 kubelet[5336]: I0420 00:15:37.280727    5336 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="463c1e224e946e020c2fc82105f803a8da81c3b618f1a14b5d82b0c7f59d8fe2"
	Apr 20 00:15:37 functional-614300 kubelet[5336]: I0420 00:15:37.616360    5336 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a07aa207c4eb70290fbceff424a539a43c078967e95bd47acffec21d7b0f9b28"
	Apr 20 00:15:37 functional-614300 kubelet[5336]: I0420 00:15:37.638308    5336 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3162d8c75061c72e5dc13bb7dbb8f9d5ee06899c0a86a73be4f47509c8eac693"
	Apr 20 00:15:39 functional-614300 kubelet[5336]: I0420 00:15:39.724711    5336 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 20 00:15:46 functional-614300 kubelet[5336]: I0420 00:15:46.102232    5336 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 20 00:16:30 functional-614300 kubelet[5336]: E0420 00:16:30.152681    5336 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:16:30 functional-614300 kubelet[5336]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:16:30 functional-614300 kubelet[5336]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:16:30 functional-614300 kubelet[5336]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:16:30 functional-614300 kubelet[5336]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:17:30 functional-614300 kubelet[5336]: E0420 00:17:30.155220    5336 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:17:30 functional-614300 kubelet[5336]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:17:30 functional-614300 kubelet[5336]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:17:30 functional-614300 kubelet[5336]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:17:30 functional-614300 kubelet[5336]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [3d5cfc2d84df] <==
	I0420 00:15:37.561771       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0420 00:15:37.602892       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0420 00:15:37.604610       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0420 00:15:55.035544       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0420 00:15:55.036235       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-614300_8052ae08-d681-4ecc-82f4-59827fa59de1!
	I0420 00:15:55.036029       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c7a67278-b0bd-424d-9355-30642f505b33", APIVersion:"v1", ResourceVersion:"596", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-614300_8052ae08-d681-4ecc-82f4-59827fa59de1 became leader
	I0420 00:15:55.140998       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-614300_8052ae08-d681-4ecc-82f4-59827fa59de1!
	
	
	==> storage-provisioner [9eee3dd7d42a] <==
	I0420 00:15:27.109551       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0420 00:15:27.118842       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
** stderr ** 
	W0419 17:17:34.790849   15276 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-614300 -n functional-614300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-614300 -n functional-614300: (11.8015078s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-614300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (33.65s)

TestFunctional/parallel/ConfigCmd (1.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-614300 config unset cpus" to be -""- but got *"W0419 17:20:44.707866    2704 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-614300 config get cpus: exit status 14 (196.0725ms)

** stderr ** 
	W0419 17:20:44.941430    8016 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-614300 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0419 17:20:44.941430    8016 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-614300 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0419 17:20:45.125606    6364 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-614300 config get cpus" to be -""- but got *"W0419 17:20:45.343302    1644 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-614300 config unset cpus" to be -""- but got *"W0419 17:20:45.537984    8004 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-614300 config get cpus: exit status 14 (176.3478ms)

** stderr ** 
	W0419 17:20:45.731687    9212 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-614300 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0419 17:20:45.731687    9212 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.22s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-614300 service --namespace=default --https --url hello-node: exit status 1 (15.0344118s)

** stderr ** 
	W0419 17:21:29.163773    7656 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-614300 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

TestFunctional/parallel/ServiceCmd/Format (15.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-614300 service hello-node --url --format={{.IP}}: exit status 1 (15.03797s)

** stderr ** 
	W0419 17:21:44.231530   10360 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-614300 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.04s)

TestFunctional/parallel/ServiceCmd/URL (15.06s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-614300 service hello-node --url: exit status 1 (15.0558227s)

** stderr ** 
	W0419 17:21:59.248418   15160 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-614300 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.06s)
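Every failure in this report shares the same stderr warning: the Docker CLI cannot find the metadata file for its "default" context. Docker keys each context's metadata directory under `.docker\contexts\meta\` by the SHA-256 digest of the context name, which is why the identical hash-named directory appears in every message — it is simply the digest of the string `default`. A minimal sanity check (not part of the test suite):

```python
import hashlib

# Docker stores context metadata at .docker/contexts/meta/<sha256(name)>/meta.json;
# for the "default" context that digest matches the path in the warnings above.
digest = hashlib.sha256(b"default").hexdigest()
print(digest)
# -> 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```

A common remediation (assuming a stale or missing context store, not confirmed for this CI host) is `docker context use default`, or removing the `.docker\contexts` directory so the CLI recreates it; the warning itself is benign noise, and the test failures below stem from the service/ping operations, not from this message.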

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (67.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-dxkjp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-dxkjp -- sh -c "ping -c 1 172.19.32.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-dxkjp -- sh -c "ping -c 1 172.19.32.1": exit status 1 (10.4360377s)

                                                
                                                
-- stdout --
	PING 172.19.32.1 (172.19.32.1): 56 data bytes
	
	--- 172.19.32.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0419 17:41:14.477946    7856 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.19.32.1) from pod (busybox-fc5497c4f-dxkjp): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-l275w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-l275w -- sh -c "ping -c 1 172.19.32.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-l275w -- sh -c "ping -c 1 172.19.32.1": exit status 1 (10.4613524s)

                                                
                                                
-- stdout --
	PING 172.19.32.1 (172.19.32.1): 56 data bytes
	
	--- 172.19.32.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0419 17:41:25.409593    4572 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.19.32.1) from pod (busybox-fc5497c4f-l275w): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-tmxkg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-tmxkg -- sh -c "ping -c 1 172.19.32.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-tmxkg -- sh -c "ping -c 1 172.19.32.1": exit status 1 (10.4663297s)

                                                
                                                
-- stdout --
	PING 172.19.32.1 (172.19.32.1): 56 data bytes
	
	--- 172.19.32.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0419 17:41:36.329232    4512 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.19.32.1) from pod (busybox-fc5497c4f-tmxkg): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-095800 -n ha-095800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-095800 -n ha-095800: (11.800881s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 logs -n 25: (8.6082788s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-614300                    | functional-614300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:24 PDT | 19 Apr 24 17:24 PDT |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-614300 image build -t     | functional-614300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:24 PDT | 19 Apr 24 17:24 PDT |
	|         | localhost/my-image:functional-614300 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-614300 image ls           | functional-614300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:24 PDT | 19 Apr 24 17:24 PDT |
	| delete  | -p functional-614300                 | functional-614300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:28 PDT | 19 Apr 24 17:29 PDT |
	| start   | -p ha-095800 --wait=true             | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:29 PDT | 19 Apr 24 17:40 PDT |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- apply -f             | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT | 19 Apr 24 17:41 PDT |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- rollout status       | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT | 19 Apr 24 17:41 PDT |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- get pods -o          | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT | 19 Apr 24 17:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- get pods -o          | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT | 19 Apr 24 17:41 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- exec                 | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT | 19 Apr 24 17:41 PDT |
	|         | busybox-fc5497c4f-dxkjp --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- exec                 | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT | 19 Apr 24 17:41 PDT |
	|         | busybox-fc5497c4f-l275w --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- exec                 | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT | 19 Apr 24 17:41 PDT |
	|         | busybox-fc5497c4f-tmxkg --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- exec                 | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT | 19 Apr 24 17:41 PDT |
	|         | busybox-fc5497c4f-dxkjp --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- exec                 | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT | 19 Apr 24 17:41 PDT |
	|         | busybox-fc5497c4f-l275w --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- exec                 | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT | 19 Apr 24 17:41 PDT |
	|         | busybox-fc5497c4f-tmxkg --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- exec                 | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT | 19 Apr 24 17:41 PDT |
	|         | busybox-fc5497c4f-dxkjp -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- exec                 | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT | 19 Apr 24 17:41 PDT |
	|         | busybox-fc5497c4f-l275w -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- exec                 | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT | 19 Apr 24 17:41 PDT |
	|         | busybox-fc5497c4f-tmxkg -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- get pods -o          | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT | 19 Apr 24 17:41 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- exec                 | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT | 19 Apr 24 17:41 PDT |
	|         | busybox-fc5497c4f-dxkjp              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- exec                 | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT |                     |
	|         | busybox-fc5497c4f-dxkjp -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.32.1             |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- exec                 | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT | 19 Apr 24 17:41 PDT |
	|         | busybox-fc5497c4f-l275w              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- exec                 | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT |                     |
	|         | busybox-fc5497c4f-l275w -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.32.1             |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- exec                 | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT | 19 Apr 24 17:41 PDT |
	|         | busybox-fc5497c4f-tmxkg              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-095800 -- exec                 | ha-095800         | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:41 PDT |                     |
	|         | busybox-fc5497c4f-tmxkg -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.32.1             |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 17:29:33
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 17:29:33.737511    6592 out.go:291] Setting OutFile to fd 796 ...
	I0419 17:29:33.738077    6592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 17:29:33.738077    6592 out.go:304] Setting ErrFile to fd 676...
	I0419 17:29:33.738077    6592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 17:29:33.767051    6592 out.go:298] Setting JSON to false
	I0419 17:29:33.770162    6592 start.go:129] hostinfo: {"hostname":"minikube1","uptime":11432,"bootTime":1713561541,"procs":203,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0419 17:29:33.770162    6592 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 17:29:33.776731    6592 out.go:177] * [ha-095800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0419 17:29:33.780567    6592 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 17:29:33.780330    6592 notify.go:220] Checking for updates...
	I0419 17:29:33.782570    6592 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 17:29:33.785497    6592 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0419 17:29:33.794155    6592 out.go:177]   - MINIKUBE_LOCATION=18703
	I0419 17:29:33.800159    6592 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 17:29:33.805983    6592 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 17:29:38.862125    6592 out.go:177] * Using the hyperv driver based on user configuration
	I0419 17:29:38.865579    6592 start.go:297] selected driver: hyperv
	I0419 17:29:38.865679    6592 start.go:901] validating driver "hyperv" against <nil>
	I0419 17:29:38.865679    6592 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 17:29:38.916290    6592 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 17:29:38.916567    6592 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 17:29:38.916567    6592 cni.go:84] Creating CNI manager for ""
	I0419 17:29:38.918279    6592 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0419 17:29:38.918279    6592 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0419 17:29:38.918279    6592 start.go:340] cluster config:
	{Name:ha-095800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 17:29:38.918771    6592 iso.go:125] acquiring lock: {Name:mk297f2abb67cbbcd36490c866afe693892d0c05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 17:29:38.923458    6592 out.go:177] * Starting "ha-095800" primary control-plane node in "ha-095800" cluster
	I0419 17:29:38.925781    6592 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 17:29:38.925781    6592 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0419 17:29:38.925781    6592 cache.go:56] Caching tarball of preloaded images
	I0419 17:29:38.926365    6592 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0419 17:29:38.926575    6592 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 17:29:38.927249    6592 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
	I0419 17:29:38.927616    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json: {Name:mk391c2cfb27f78bbb8efde26cda996bf9a124b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:29:38.928919    6592 start.go:360] acquireMachinesLock for ha-095800: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 17:29:38.928919    6592 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-095800"
	I0419 17:29:38.928919    6592 start.go:93] Provisioning new machine with config: &{Name:ha-095800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 17:29:38.928919    6592 start.go:125] createHost starting for "" (driver="hyperv")
	I0419 17:29:38.933257    6592 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 17:29:38.933369    6592 start.go:159] libmachine.API.Create for "ha-095800" (driver="hyperv")
	I0419 17:29:38.933369    6592 client.go:168] LocalClient.Create starting
	I0419 17:29:38.934202    6592 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0419 17:29:38.934524    6592 main.go:141] libmachine: Decoding PEM data...
	I0419 17:29:38.934634    6592 main.go:141] libmachine: Parsing certificate...
	I0419 17:29:38.934956    6592 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0419 17:29:38.935297    6592 main.go:141] libmachine: Decoding PEM data...
	I0419 17:29:38.935352    6592 main.go:141] libmachine: Parsing certificate...
	I0419 17:29:38.935584    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0419 17:29:40.933666    6592 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0419 17:29:40.933776    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:29:40.933889    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0419 17:29:42.612549    6592 main.go:141] libmachine: [stdout =====>] : False
	
	I0419 17:29:42.612549    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:29:42.612890    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 17:29:44.087341    6592 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 17:29:44.087437    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:29:44.087518    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 17:29:47.563367    6592 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 17:29:47.577029    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:29:47.580016    6592 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0419 17:29:48.099763    6592 main.go:141] libmachine: Creating SSH key...
	I0419 17:29:48.222908    6592 main.go:141] libmachine: Creating VM...
	I0419 17:29:48.222908    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 17:29:50.994194    6592 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 17:29:50.994194    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:29:51.007406    6592 main.go:141] libmachine: Using switch "Default Switch"
	I0419 17:29:51.007551    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 17:29:52.700662    6592 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 17:29:52.700662    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:29:52.712365    6592 main.go:141] libmachine: Creating VHD
	I0419 17:29:52.712501    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\fixed.vhd' -SizeBytes 10MB -Fixed
	I0419 17:29:56.237483    6592 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F08DEC83-980D-4BE5-8EA1-B25D5E43548C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0419 17:29:56.251394    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:29:56.251394    6592 main.go:141] libmachine: Writing magic tar header
	I0419 17:29:56.251394    6592 main.go:141] libmachine: Writing SSH key tar header
	I0419 17:29:56.262614    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\disk.vhd' -VHDType Dynamic -DeleteSource
	I0419 17:29:59.296048    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:29:59.296048    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:29:59.308343    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\disk.vhd' -SizeBytes 20000MB
	I0419 17:30:01.712744    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:01.712744    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:01.712744    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-095800 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0419 17:30:05.249572    6592 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-095800 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0419 17:30:05.261821    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:05.261821    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-095800 -DynamicMemoryEnabled $false
	I0419 17:30:07.374924    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:07.374924    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:07.387730    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-095800 -Count 2
	I0419 17:30:09.435711    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:09.435711    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:09.449906    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-095800 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\boot2docker.iso'
	I0419 17:30:11.876539    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:11.876539    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:11.889177    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-095800 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\disk.vhd'
	I0419 17:30:14.398913    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:14.398913    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:14.398913    6592 main.go:141] libmachine: Starting VM...
	I0419 17:30:14.398913    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-095800
	I0419 17:30:17.322922    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:17.322922    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:17.322922    6592 main.go:141] libmachine: Waiting for host to start...
	I0419 17:30:17.322922    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:30:19.471590    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:30:19.471590    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:19.482265    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:30:21.902245    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:21.914923    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:22.923983    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:30:25.002814    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:30:25.002814    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:25.015384    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:30:27.470762    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:27.481984    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:28.496904    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:30:30.547826    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:30:30.547826    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:30.552836    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:30:32.944468    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:32.944468    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:33.952879    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:30:36.076685    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:30:36.076685    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:36.076685    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:30:38.595373    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:38.595373    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:39.605798    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:30:41.668562    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:30:41.668562    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:41.668562    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:30:44.076344    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:30:44.076344    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:44.076344    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:30:46.143599    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:30:46.143599    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:46.143599    6592 machine.go:94] provisionDockerMachine start ...
	I0419 17:30:46.155913    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:30:48.202827    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:30:48.215757    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:48.215757    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:30:50.660391    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:30:50.660391    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:50.679559    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:30:50.689982    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.218 22 <nil> <nil>}
	I0419 17:30:50.689982    6592 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 17:30:50.825859    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0419 17:30:50.825917    6592 buildroot.go:166] provisioning hostname "ha-095800"
	I0419 17:30:50.825968    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:30:52.842161    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:30:52.855687    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:52.855805    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:30:55.303389    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:30:55.303389    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:55.323989    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:30:55.324692    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.218 22 <nil> <nil>}
	I0419 17:30:55.324692    6592 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-095800 && echo "ha-095800" | sudo tee /etc/hostname
	I0419 17:30:55.487112    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-095800
	
	I0419 17:30:55.487216    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:30:57.471053    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:30:57.471053    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:57.483072    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:30:59.838191    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:30:59.838191    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:59.855926    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:30:59.855926    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.218 22 <nil> <nil>}
	I0419 17:30:59.856540    6592 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-095800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-095800/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-095800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 17:31:00.005424    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 17:31:00.005534    6592 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0419 17:31:00.005534    6592 buildroot.go:174] setting up certificates
	I0419 17:31:00.005614    6592 provision.go:84] configureAuth start
	I0419 17:31:00.005712    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:02.022080    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:02.022080    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:02.033255    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:04.466604    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:04.466604    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:04.479778    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:06.458998    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:06.470979    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:06.470979    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:08.936620    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:08.936620    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:08.949890    6592 provision.go:143] copyHostCerts
	I0419 17:31:08.950044    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0419 17:31:08.950335    6592 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0419 17:31:08.950527    6592 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0419 17:31:08.951082    6592 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0419 17:31:08.952145    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0419 17:31:08.952465    6592 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0419 17:31:08.952545    6592 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0419 17:31:08.952873    6592 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0419 17:31:08.953950    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0419 17:31:08.954506    6592 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0419 17:31:08.954506    6592 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0419 17:31:08.954714    6592 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0419 17:31:08.955735    6592 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-095800 san=[127.0.0.1 172.19.32.218 ha-095800 localhost minikube]
	I0419 17:31:09.094442    6592 provision.go:177] copyRemoteCerts
	I0419 17:31:09.114871    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 17:31:09.114871    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:11.108703    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:11.108703    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:11.120872    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:13.596760    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:13.596760    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:13.609796    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:31:13.721337    6592 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6063999s)
	I0419 17:31:13.721447    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0419 17:31:13.721558    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0419 17:31:13.769222    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0419 17:31:13.770037    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0419 17:31:13.819018    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0419 17:31:13.819610    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 17:31:13.866596    6592 provision.go:87] duration metric: took 13.8608268s to configureAuth
	I0419 17:31:13.866596    6592 buildroot.go:189] setting minikube options for container-runtime
	I0419 17:31:13.867568    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:31:13.867704    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:15.834901    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:15.834901    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:15.834901    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:18.285790    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:18.285869    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:18.291392    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:31:18.292178    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.218 22 <nil> <nil>}
	I0419 17:31:18.292178    6592 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0419 17:31:18.427821    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0419 17:31:18.427937    6592 buildroot.go:70] root file system type: tmpfs
	I0419 17:31:18.428091    6592 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0419 17:31:18.428307    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:20.422020    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:20.422020    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:20.434706    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:22.869543    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:22.869543    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:22.889098    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:31:22.889252    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.218 22 <nil> <nil>}
	I0419 17:31:22.889252    6592 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0419 17:31:23.056795    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0419 17:31:23.056886    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:25.048762    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:25.048762    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:25.048762    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:27.455906    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:27.455906    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:27.479079    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:31:27.479635    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.218 22 <nil> <nil>}
	I0419 17:31:27.479635    6592 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0419 17:31:29.587786    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0419 17:31:29.587863    6592 machine.go:97] duration metric: took 43.4441557s to provisionDockerMachine
	I0419 17:31:29.587894    6592 client.go:171] duration metric: took 1m50.6542478s to LocalClient.Create
	I0419 17:31:29.587990    6592 start.go:167] duration metric: took 1m50.6543445s to libmachine.API.Create "ha-095800"
	I0419 17:31:29.588033    6592 start.go:293] postStartSetup for "ha-095800" (driver="hyperv")
	I0419 17:31:29.588072    6592 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 17:31:29.602279    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 17:31:29.602279    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:31.584924    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:31.584924    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:31.584924    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:34.006039    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:34.006039    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:34.018121    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:31:34.129289    6592 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5269634s)
	I0419 17:31:34.143743    6592 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 17:31:34.152466    6592 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 17:31:34.152466    6592 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0419 17:31:34.152466    6592 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0419 17:31:34.153942    6592 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> 34162.pem in /etc/ssl/certs
	I0419 17:31:34.153942    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /etc/ssl/certs/34162.pem
	I0419 17:31:34.166174    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 17:31:34.186683    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /etc/ssl/certs/34162.pem (1708 bytes)
	I0419 17:31:34.234425    6592 start.go:296] duration metric: took 4.6463804s for postStartSetup
	I0419 17:31:34.238214    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:36.226513    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:36.226513    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:36.226513    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:38.627616    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:38.627616    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:38.639535    6592 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
	I0419 17:31:38.642656    6592 start.go:128] duration metric: took 1m59.713438s to createHost
	I0419 17:31:38.642742    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:40.623935    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:40.623935    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:40.636053    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:43.026686    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:43.026686    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:43.040936    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:31:43.040936    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.218 22 <nil> <nil>}
	I0419 17:31:43.045596    6592 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 17:31:43.186947    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713573103.186063964
	
	I0419 17:31:43.186947    6592 fix.go:216] guest clock: 1713573103.186063964
	I0419 17:31:43.186947    6592 fix.go:229] Guest: 2024-04-19 17:31:43.186063964 -0700 PDT Remote: 2024-04-19 17:31:38.6426563 -0700 PDT m=+125.010437401 (delta=4.543407664s)
	I0419 17:31:43.187472    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:45.151268    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:45.163524    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:45.163742    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:47.513964    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:47.527446    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:47.533827    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:31:47.534866    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.218 22 <nil> <nil>}
	I0419 17:31:47.534866    6592 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713573103
	I0419 17:31:47.692212    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: Sat Apr 20 00:31:43 UTC 2024
	
	I0419 17:31:47.692212    6592 fix.go:236] clock set: Sat Apr 20 00:31:43 UTC 2024
	 (err=<nil>)
	I0419 17:31:47.692212    6592 start.go:83] releasing machines lock for "ha-095800", held for 2m8.7629719s
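The fix.go lines above show the clock-skew repair: the guest clock is read over SSH with `date +%s.%N`, compared against the host time (delta=4.543407664s in this run), and reset with `sudo date -s @<unix>`. A minimal sketch of that comparison, assuming hypothetical helper names (`parseGuestClock`, `fixCommand`) that are illustrative only, not minikube's actual API:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses the output of `date +%s.%N` on the guest
// (e.g. "1713573103.186063964") into a time.Time. Hypothetical helper.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

// fixCommand builds the command the log runs to reset a skewed guest
// clock (seconds precision, matching `sudo date -s @1713573103` above).
func fixCommand(guest time.Time) string {
	return fmt.Sprintf("sudo date -s @%d", guest.Unix())
}

func main() {
	guest, _ := parseGuestClock("1713573103.186063964")
	host := guest.Add(-4543407664 * time.Nanosecond) // delta taken from the log above
	fmt.Println("delta:", guest.Sub(host))           // prints "delta: 4.543407664s"
	fmt.Println(fixCommand(guest))                   // prints "sudo date -s @1713573103"
}
```

Note the reset only carries whole seconds, which is why the log still reports a sub-second guest/remote delta after the fix.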
	I0419 17:31:47.692803    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:49.668473    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:49.668473    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:49.668473    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:52.060494    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:52.074298    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:52.079190    6592 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 17:31:52.079190    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:52.088664    6592 ssh_runner.go:195] Run: cat /version.json
	I0419 17:31:52.088664    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:54.149791    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:54.162193    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:54.162193    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:54.162193    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:54.162193    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:54.162377    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:56.681532    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:56.693565    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:56.693565    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:31:56.720933    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:56.722893    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:56.723199    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:31:56.790418    6592 ssh_runner.go:235] Completed: cat /version.json: (4.7017426s)
	I0419 17:31:56.805060    6592 ssh_runner.go:195] Run: systemctl --version
	I0419 17:31:57.049513    6592 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9703101s)
	I0419 17:31:57.064050    6592 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 17:31:57.073407    6592 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 17:31:57.084623    6592 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 17:31:57.113231    6592 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 17:31:57.113231    6592 start.go:494] detecting cgroup driver to use...
	I0419 17:31:57.113231    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 17:31:57.165411    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0419 17:31:57.204163    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0419 17:31:57.229829    6592 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0419 17:31:57.243691    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0419 17:31:57.277897    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 17:31:57.313480    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0419 17:31:57.347097    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 17:31:57.379463    6592 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 17:31:57.415957    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0419 17:31:57.451592    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0419 17:31:57.488905    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0419 17:31:57.521807    6592 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 17:31:57.558339    6592 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 17:31:57.591650    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:31:57.784417    6592 ssh_runner.go:195] Run: sudo systemctl restart containerd
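The run of sed invocations above rewrites /etc/containerd/config.toml in place to force the "cgroupfs" cgroup driver. The key edit, `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`, can be mirrored with a Go regexp; this is an illustrative re-implementation (the real code runs sed over SSH), and `setSystemdCgroup` is a made-up name:

```go
package main

import (
	"fmt"
	"regexp"
)

// setSystemdCgroup rewrites every SystemdCgroup key in a containerd
// config.toml string, preserving indentation, the same way the sed
// command in the log does. Illustrative helper, not minikube code.
func setSystemdCgroup(config string, enabled bool) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(config, fmt.Sprintf("${1}SystemdCgroup = %v", enabled))
}

func main() {
	toml := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
		"            SystemdCgroup = true"
	fmt.Println(setSystemdCgroup(toml, false))
}
```

Setting `SystemdCgroup = false` here must agree with the `cgroupDriver: cgroupfs` kubelet setting generated later in this log, or the kubelet and runtime would disagree about cgroup ownership.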
	I0419 17:31:57.817224    6592 start.go:494] detecting cgroup driver to use...
	I0419 17:31:57.830914    6592 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0419 17:31:57.869207    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 17:31:57.900389    6592 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 17:31:57.952329    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 17:31:57.992892    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 17:31:58.034281    6592 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0419 17:31:58.103157    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 17:31:58.128173    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 17:31:58.177907    6592 ssh_runner.go:195] Run: which cri-dockerd
	I0419 17:31:58.211830    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0419 17:31:58.230585    6592 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0419 17:31:58.281983    6592 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0419 17:31:58.482222    6592 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0419 17:31:58.670765    6592 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0419 17:31:58.670765    6592 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0419 17:31:58.716229    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:31:58.909018    6592 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 17:32:01.409530    6592 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5005051s)
	I0419 17:32:01.422052    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0419 17:32:01.457845    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 17:32:01.497065    6592 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0419 17:32:01.705185    6592 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0419 17:32:01.904644    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:32:02.102021    6592 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0419 17:32:02.146347    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 17:32:02.183141    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:32:02.377075    6592 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0419 17:32:02.484776    6592 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0419 17:32:02.503160    6592 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0419 17:32:02.511881    6592 start.go:562] Will wait 60s for crictl version
	I0419 17:32:02.527914    6592 ssh_runner.go:195] Run: which crictl
	I0419 17:32:02.546721    6592 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 17:32:02.601272    6592 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0419 17:32:02.612044    6592 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 17:32:02.652626    6592 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 17:32:02.688296    6592 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0419 17:32:02.688407    6592 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0419 17:32:02.693273    6592 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0419 17:32:02.693273    6592 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0419 17:32:02.693461    6592 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0419 17:32:02.693496    6592 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8c:b9:25 Flags:up|broadcast|multicast|running}
	I0419 17:32:02.696305    6592 ip.go:210] interface addr: fe80::ce04:318e:a1d8:4460/64
	I0419 17:32:02.696305    6592 ip.go:210] interface addr: 172.19.32.1/20
	I0419 17:32:02.712048    6592 ssh_runner.go:195] Run: grep 172.19.32.1	host.minikube.internal$ /etc/hosts
	I0419 17:32:02.718407    6592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.32.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 17:32:02.752465    6592 kubeadm.go:877] updating cluster {Name:ha-095800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP
:172.19.47.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.32.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 17:32:02.752465    6592 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 17:32:02.763495    6592 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0419 17:32:02.781982    6592 docker.go:685] Got preloaded images: 
	I0419 17:32:02.781982    6592 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0419 17:32:02.795234    6592 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0419 17:32:02.826514    6592 ssh_runner.go:195] Run: which lz4
	I0419 17:32:02.829232    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0419 17:32:02.846708    6592 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0419 17:32:02.854109    6592 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0419 17:32:02.854302    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0419 17:32:05.022924    6592 docker.go:649] duration metric: took 2.1936873s to copy over tarball
	I0419 17:32:05.040914    6592 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0419 17:32:13.700030    6592 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6590938s)
	I0419 17:32:13.700030    6592 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0419 17:32:13.771028    6592 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0419 17:32:13.806461    6592 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0419 17:32:13.855109    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:32:14.070585    6592 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 17:32:17.549912    6592 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.4792825s)
	I0419 17:32:17.560824    6592 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0419 17:32:17.589467    6592 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0419 17:32:17.589467    6592 cache_images.go:84] Images are preloaded, skipping loading
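The preload decision above hinges on comparing `docker images --format {{.Repository}}:{{.Tag}}` output against the expected image list: the first check found kube-apiserver missing and triggered the 359 MB tarball copy; after `systemctl restart docker` the images appear and loading is skipped. A sketch of that membership check, under the assumption that exact repository:tag matching is sufficient (`hasPreloadedImage` is an invented name, not minikube's docker.go API):

```go
package main

import (
	"fmt"
	"strings"
)

// hasPreloadedImage reports whether the newline-separated output of
// `docker images --format {{.Repository}}:{{.Tag}}` contains the given
// image. Sketch of the check the log performs before deciding whether
// to copy the preload tarball.
func hasPreloadedImage(imagesOutput, image string) bool {
	for _, line := range strings.Split(imagesOutput, "\n") {
		if strings.TrimSpace(line) == image {
			return true
		}
	}
	return false
}

func main() {
	after := "registry.k8s.io/kube-apiserver:v1.30.0\nregistry.k8s.io/etcd:3.5.12-0"
	fmt.Println(hasPreloadedImage(after, "registry.k8s.io/kube-apiserver:v1.30.0")) // true
	fmt.Println(hasPreloadedImage("", "registry.k8s.io/kube-apiserver:v1.30.0"))    // false
}
```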
	I0419 17:32:17.589467    6592 kubeadm.go:928] updating node { 172.19.32.218 8443 v1.30.0 docker true true} ...
	I0419 17:32:17.589996    6592 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-095800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.32.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP:172.19.47.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 17:32:17.601156    6592 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0419 17:32:17.638404    6592 cni.go:84] Creating CNI manager for ""
	I0419 17:32:17.638470    6592 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0419 17:32:17.638510    6592 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 17:32:17.638553    6592 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.32.218 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-095800 NodeName:ha-095800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.32.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.32.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 17:32:17.638849    6592 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.32.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-095800"
	  kubeletExtraArgs:
	    node-ip: 172.19.32.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.32.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0419 17:32:17.638849    6592 kube-vip.go:111] generating kube-vip config ...
	I0419 17:32:17.649737    6592 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0419 17:32:17.678556    6592 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0419 17:32:17.678863    6592 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.47.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0419 17:32:17.692988    6592 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 17:32:17.710078    6592 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 17:32:17.724534    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0419 17:32:17.743387    6592 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0419 17:32:17.772504    6592 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 17:32:17.800953    6592 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0419 17:32:17.830227    6592 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0419 17:32:17.877176    6592 ssh_runner.go:195] Run: grep 172.19.47.254	control-plane.minikube.internal$ /etc/hosts
	I0419 17:32:17.883915    6592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.47.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 17:32:17.918845    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:32:18.130718    6592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 17:32:18.160872    6592 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800 for IP: 172.19.32.218
	I0419 17:32:18.160929    6592 certs.go:194] generating shared ca certs ...
	I0419 17:32:18.160929    6592 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:32:18.161559    6592 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0419 17:32:18.161902    6592 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0419 17:32:18.161902    6592 certs.go:256] generating profile certs ...
	I0419 17:32:18.162602    6592 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\client.key
	I0419 17:32:18.162602    6592 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\client.crt with IP's: []
	I0419 17:32:18.320917    6592 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\client.crt ...
	I0419 17:32:18.320917    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\client.crt: {Name:mk711b752ff52da904c50e38439fdc0151dc3ec3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:32:18.321965    6592 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\client.key ...
	I0419 17:32:18.321965    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\client.key: {Name:mk037074c22d8f8025321a73c62f0358f708eddd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:32:18.323722    6592 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.8667e9b8
	I0419 17:32:18.324788    6592 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.8667e9b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.32.218 172.19.47.254]
	I0419 17:32:18.424402    6592 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.8667e9b8 ...
	I0419 17:32:18.424402    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.8667e9b8: {Name:mkdbd8a4ad7a7a81f6e8f1b50d58f2d3833f9d81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:32:18.427894    6592 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.8667e9b8 ...
	I0419 17:32:18.427894    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.8667e9b8: {Name:mkb6551bebeb36a10c30482ef6ea1a13a9456a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:32:18.429217    6592 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.8667e9b8 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt
	I0419 17:32:18.434955    6592 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.8667e9b8 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key
	I0419 17:32:18.441440    6592 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key
	I0419 17:32:18.441440    6592 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.crt with IP's: []
	I0419 17:32:18.551033    6592 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.crt ...
	I0419 17:32:18.551033    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.crt: {Name:mk74da4d53e00801e4765e0c25e4bcf60f62806e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:32:18.555281    6592 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key ...
	I0419 17:32:18.555281    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key: {Name:mkdc1851d74dcae8a8a9dd44613b192a8632ad57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:32:18.556589    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 17:32:18.557021    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0419 17:32:18.557272    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 17:32:18.557407    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 17:32:18.557407    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 17:32:18.557407    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 17:32:18.557407    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 17:32:18.560275    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 17:32:18.566549    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem (1338 bytes)
	W0419 17:32:18.567271    6592 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416_empty.pem, impossibly tiny 0 bytes
	I0419 17:32:18.567494    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0419 17:32:18.567494    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0419 17:32:18.567494    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0419 17:32:18.568251    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0419 17:32:18.568397    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem (1708 bytes)
	I0419 17:32:18.568397    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem -> /usr/share/ca-certificates/3416.pem
	I0419 17:32:18.569025    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /usr/share/ca-certificates/34162.pem
	I0419 17:32:18.569158    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:32:18.569381    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 17:32:18.623928    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 17:32:18.673142    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 17:32:18.718851    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 17:32:18.768839    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0419 17:32:18.819335    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0419 17:32:18.868897    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 17:32:18.914112    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0419 17:32:18.960195    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem --> /usr/share/ca-certificates/3416.pem (1338 bytes)
	I0419 17:32:19.005762    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /usr/share/ca-certificates/34162.pem (1708 bytes)
	I0419 17:32:19.053902    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 17:32:19.094611    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0419 17:32:19.139248    6592 ssh_runner.go:195] Run: openssl version
	I0419 17:32:19.164542    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 17:32:19.197565    6592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:32:19.204064    6592 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:32:19.217377    6592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:32:19.240619    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 17:32:19.272150    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3416.pem && ln -fs /usr/share/ca-certificates/3416.pem /etc/ssl/certs/3416.pem"
	I0419 17:32:19.304312    6592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3416.pem
	I0419 17:32:19.311837    6592 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 17:32:19.326934    6592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3416.pem
	I0419 17:32:19.349883    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3416.pem /etc/ssl/certs/51391683.0"
	I0419 17:32:19.385234    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34162.pem && ln -fs /usr/share/ca-certificates/34162.pem /etc/ssl/certs/34162.pem"
	I0419 17:32:19.427826    6592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34162.pem
	I0419 17:32:19.437780    6592 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 17:32:19.451063    6592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34162.pem
	I0419 17:32:19.475648    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34162.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 17:32:19.521818    6592 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 17:32:19.528492    6592 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 17:32:19.528741    6592 kubeadm.go:391] StartCluster: {Name:ha-095800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP:172.19.47.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.32.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 17:32:19.538401    6592 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0419 17:32:19.570925    6592 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0419 17:32:19.606816    6592 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 17:32:19.638164    6592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 17:32:19.655153    6592 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 17:32:19.655153    6592 kubeadm.go:156] found existing configuration files:
	
	I0419 17:32:19.666376    6592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0419 17:32:19.684915    6592 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 17:32:19.698874    6592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 17:32:19.732844    6592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0419 17:32:19.752165    6592 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 17:32:19.764074    6592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 17:32:19.796662    6592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0419 17:32:19.805026    6592 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 17:32:19.825011    6592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 17:32:19.860236    6592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0419 17:32:19.876952    6592 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 17:32:19.890910    6592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0419 17:32:19.907941    6592 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0419 17:32:20.388361    6592 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0419 17:32:34.760431    6592 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0419 17:32:34.760599    6592 kubeadm.go:309] [preflight] Running pre-flight checks
	I0419 17:32:34.760599    6592 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0419 17:32:34.760599    6592 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0419 17:32:34.761164    6592 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0419 17:32:34.761402    6592 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0419 17:32:34.764399    6592 out.go:204]   - Generating certificates and keys ...
	I0419 17:32:34.764654    6592 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0419 17:32:34.764856    6592 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0419 17:32:34.764856    6592 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0419 17:32:34.764856    6592 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0419 17:32:34.764856    6592 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0419 17:32:34.764856    6592 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0419 17:32:34.765495    6592 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0419 17:32:34.765673    6592 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-095800 localhost] and IPs [172.19.32.218 127.0.0.1 ::1]
	I0419 17:32:34.765673    6592 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0419 17:32:34.765673    6592 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-095800 localhost] and IPs [172.19.32.218 127.0.0.1 ::1]
	I0419 17:32:34.766427    6592 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0419 17:32:34.766615    6592 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0419 17:32:34.766768    6592 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0419 17:32:34.766910    6592 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0419 17:32:34.766910    6592 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0419 17:32:34.766910    6592 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0419 17:32:34.766910    6592 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0419 17:32:34.767431    6592 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0419 17:32:34.767522    6592 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0419 17:32:34.767619    6592 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0419 17:32:34.767619    6592 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0419 17:32:34.769918    6592 out.go:204]   - Booting up control plane ...
	I0419 17:32:34.769918    6592 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0419 17:32:34.769918    6592 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0419 17:32:34.770476    6592 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0419 17:32:34.770652    6592 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 17:32:34.770652    6592 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 17:32:34.770652    6592 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0419 17:32:34.770652    6592 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0419 17:32:34.770652    6592 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0419 17:32:34.771184    6592 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.004096977s
	I0419 17:32:34.771326    6592 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0419 17:32:34.771431    6592 kubeadm.go:309] [api-check] The API server is healthy after 7.502808092s
	I0419 17:32:34.771431    6592 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0419 17:32:34.771431    6592 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0419 17:32:34.771431    6592 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0419 17:32:34.772260    6592 kubeadm.go:309] [mark-control-plane] Marking the node ha-095800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0419 17:32:34.772569    6592 kubeadm.go:309] [bootstrap-token] Using token: 1vlilj.5gxlnz6bb5qp1ob8
	I0419 17:32:34.774298    6592 out.go:204]   - Configuring RBAC rules ...
	I0419 17:32:34.775110    6592 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0419 17:32:34.775306    6592 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0419 17:32:34.775306    6592 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0419 17:32:34.775850    6592 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0419 17:32:34.776046    6592 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0419 17:32:34.776046    6592 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0419 17:32:34.776046    6592 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0419 17:32:34.776602    6592 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0419 17:32:34.776602    6592 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0419 17:32:34.776602    6592 kubeadm.go:309] 
	I0419 17:32:34.776899    6592 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0419 17:32:34.776949    6592 kubeadm.go:309] 
	I0419 17:32:34.777207    6592 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0419 17:32:34.777261    6592 kubeadm.go:309] 
	I0419 17:32:34.777421    6592 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0419 17:32:34.777602    6592 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0419 17:32:34.777804    6592 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0419 17:32:34.777841    6592 kubeadm.go:309] 
	I0419 17:32:34.778021    6592 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0419 17:32:34.778054    6592 kubeadm.go:309] 
	I0419 17:32:34.778245    6592 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0419 17:32:34.778280    6592 kubeadm.go:309] 
	I0419 17:32:34.778448    6592 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0419 17:32:34.778734    6592 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0419 17:32:34.778979    6592 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0419 17:32:34.779011    6592 kubeadm.go:309] 
	I0419 17:32:34.779261    6592 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0419 17:32:34.779384    6592 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0419 17:32:34.779384    6592 kubeadm.go:309] 
	I0419 17:32:34.779507    6592 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 1vlilj.5gxlnz6bb5qp1ob8 \
	I0419 17:32:34.779632    6592 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 \
	I0419 17:32:34.779662    6592 kubeadm.go:309] 	--control-plane 
	I0419 17:32:34.779694    6592 kubeadm.go:309] 
	I0419 17:32:34.779785    6592 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0419 17:32:34.779817    6592 kubeadm.go:309] 
	I0419 17:32:34.779847    6592 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 1vlilj.5gxlnz6bb5qp1ob8 \
	I0419 17:32:34.779847    6592 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 
	I0419 17:32:34.779847    6592 cni.go:84] Creating CNI manager for ""
	I0419 17:32:34.779847    6592 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0419 17:32:34.782298    6592 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0419 17:32:34.801152    6592 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0419 17:32:34.809572    6592 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0419 17:32:34.809652    6592 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0419 17:32:34.857691    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0419 17:32:35.589912    6592 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0419 17:32:35.604538    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:35.607752    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-095800 minikube.k8s.io/updated_at=2024_04_19T17_32_35_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=ha-095800 minikube.k8s.io/primary=true
	I0419 17:32:35.624924    6592 ops.go:34] apiserver oom_adj: -16
	I0419 17:32:35.857311    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:36.375335    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:36.864680    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:37.363004    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:37.851042    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:38.358165    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:38.867346    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:39.355457    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:39.864501    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:40.360093    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:40.855528    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:41.353833    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:41.862142    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:42.366966    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:42.856795    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:43.350865    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:43.857246    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:44.364032    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:44.861124    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:45.365278    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:45.858185    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:46.360562    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:46.853694    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:47.005722    6592 kubeadm.go:1107] duration metric: took 11.4157203s to wait for elevateKubeSystemPrivileges
	W0419 17:32:47.005844    6592 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0419 17:32:47.005844    6592 kubeadm.go:393] duration metric: took 27.4770345s to StartCluster
	I0419 17:32:47.005900    6592 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:32:47.006087    6592 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 17:32:47.008060    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:32:47.010105    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0419 17:32:47.010105    6592 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.32.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 17:32:47.010105    6592 start.go:240] waiting for startup goroutines ...
	I0419 17:32:47.010105    6592 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0419 17:32:47.010105    6592 addons.go:69] Setting default-storageclass=true in profile "ha-095800"
	I0419 17:32:47.010105    6592 addons.go:69] Setting storage-provisioner=true in profile "ha-095800"
	I0419 17:32:47.010105    6592 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-095800"
	I0419 17:32:47.010105    6592 addons.go:234] Setting addon storage-provisioner=true in "ha-095800"
	I0419 17:32:47.010105    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:32:47.010105    6592 host.go:66] Checking if "ha-095800" exists ...
	I0419 17:32:47.011167    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:32:47.011167    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:32:47.201363    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.32.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0419 17:32:47.647098    6592 start.go:946] {"host.minikube.internal": 172.19.32.1} host record injected into CoreDNS's ConfigMap
	I0419 17:32:49.139169    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:32:49.146189    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:32:49.146246    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:32:49.146295    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:32:49.148387    6592 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 17:32:49.147368    6592 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 17:32:49.151218    6592 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 17:32:49.151218    6592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0419 17:32:49.151218    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:32:49.151218    6592 kapi.go:59] client config for ha-095800: &rest.Config{Host:"https://172.19.47.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-095800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-095800\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c35620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 17:32:49.153289    6592 cert_rotation.go:137] Starting client certificate rotation controller
	I0419 17:32:49.153604    6592 addons.go:234] Setting addon default-storageclass=true in "ha-095800"
	I0419 17:32:49.153604    6592 host.go:66] Checking if "ha-095800" exists ...
	I0419 17:32:49.155387    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:32:51.332326    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:32:51.345224    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:32:51.345391    6592 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0419 17:32:51.345456    6592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0419 17:32:51.345456    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:32:51.487983    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:32:51.487983    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:32:51.487983    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:32:53.536400    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:32:53.536400    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:32:53.536400    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:32:54.049096    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:32:54.050078    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:32:54.050333    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:32:54.191171    6592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 17:32:56.135414    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:32:56.135414    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:32:56.135414    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:32:56.289538    6592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0419 17:32:56.466935    6592 round_trippers.go:463] GET https://172.19.47.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0419 17:32:56.466996    6592 round_trippers.go:469] Request Headers:
	I0419 17:32:56.467048    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:32:56.467048    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:32:56.483370    6592 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0419 17:32:56.484634    6592 round_trippers.go:463] PUT https://172.19.47.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0419 17:32:56.484677    6592 round_trippers.go:469] Request Headers:
	I0419 17:32:56.484760    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:32:56.484863    6592 round_trippers.go:473]     Content-Type: application/json
	I0419 17:32:56.484863    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:32:56.487612    6592 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:32:56.492063    6592 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0419 17:32:56.495558    6592 addons.go:505] duration metric: took 9.4854287s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0419 17:32:56.495633    6592 start.go:245] waiting for cluster config update ...
	I0419 17:32:56.495705    6592 start.go:254] writing updated cluster config ...
	I0419 17:32:56.499234    6592 out.go:177] 
	I0419 17:32:56.509956    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:32:56.510141    6592 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
	I0419 17:32:56.525772    6592 out.go:177] * Starting "ha-095800-m02" control-plane node in "ha-095800" cluster
	I0419 17:32:56.534136    6592 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 17:32:56.536794    6592 cache.go:56] Caching tarball of preloaded images
	I0419 17:32:56.537438    6592 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0419 17:32:56.537466    6592 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 17:32:56.537466    6592 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
	I0419 17:32:56.540940    6592 start.go:360] acquireMachinesLock for ha-095800-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 17:32:56.541147    6592 start.go:364] duration metric: took 172.6µs to acquireMachinesLock for "ha-095800-m02"
	I0419 17:32:56.541391    6592 start.go:93] Provisioning new machine with config: &{Name:ha-095800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:def
ault APIServerHAVIP:172.19.47.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.32.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C
:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 17:32:56.541453    6592 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0419 17:32:56.543448    6592 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 17:32:56.543996    6592 start.go:159] libmachine.API.Create for "ha-095800" (driver="hyperv")
	I0419 17:32:56.544099    6592 client.go:168] LocalClient.Create starting
	I0419 17:32:56.544628    6592 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0419 17:32:56.544628    6592 main.go:141] libmachine: Decoding PEM data...
	I0419 17:32:56.544628    6592 main.go:141] libmachine: Parsing certificate...
	I0419 17:32:56.545234    6592 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0419 17:32:56.545234    6592 main.go:141] libmachine: Decoding PEM data...
	I0419 17:32:56.545234    6592 main.go:141] libmachine: Parsing certificate...
	I0419 17:32:56.545234    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0419 17:32:58.413308    6592 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0419 17:32:58.413308    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:32:58.413308    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0419 17:33:00.149585    6592 main.go:141] libmachine: [stdout =====>] : False
	
	I0419 17:33:00.149585    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:00.149585    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 17:33:01.608158    6592 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 17:33:01.608158    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:01.617518    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 17:33:05.085817    6592 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 17:33:05.085817    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:05.088839    6592 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0419 17:33:05.559590    6592 main.go:141] libmachine: Creating SSH key...
	I0419 17:33:05.664153    6592 main.go:141] libmachine: Creating VM...
	I0419 17:33:05.664153    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 17:33:08.443948    6592 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 17:33:08.457064    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:08.457064    6592 main.go:141] libmachine: Using switch "Default Switch"
	I0419 17:33:08.457232    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 17:33:10.179571    6592 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 17:33:10.179571    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:10.179571    6592 main.go:141] libmachine: Creating VHD
	I0419 17:33:10.179571    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0419 17:33:13.789883    6592 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : ED82E740-B20D-44DE-BD86-3F701B42C30A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0419 17:33:13.789883    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:13.789883    6592 main.go:141] libmachine: Writing magic tar header
	I0419 17:33:13.789883    6592 main.go:141] libmachine: Writing SSH key tar header
	I0419 17:33:13.790788    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0419 17:33:16.894244    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:16.894244    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:16.895366    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\disk.vhd' -SizeBytes 20000MB
	I0419 17:33:19.361031    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:19.361031    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:19.373223    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-095800-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0419 17:33:22.926626    6592 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-095800-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0419 17:33:22.939521    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:22.939521    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-095800-m02 -DynamicMemoryEnabled $false
	I0419 17:33:25.070246    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:25.082953    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:25.083076    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-095800-m02 -Count 2
	I0419 17:33:27.181634    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:27.181688    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:27.181688    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-095800-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\boot2docker.iso'
	I0419 17:33:29.709958    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:29.709958    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:29.710052    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-095800-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\disk.vhd'
	I0419 17:33:32.283608    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:32.297309    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:32.297309    6592 main.go:141] libmachine: Starting VM...
	I0419 17:33:32.297479    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-095800-m02
	I0419 17:33:35.382480    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:35.383660    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:35.383733    6592 main.go:141] libmachine: Waiting for host to start...
	I0419 17:33:35.383733    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:33:37.586849    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:33:37.586849    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:37.592459    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:33:40.064917    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:40.064917    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:41.070094    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:33:43.231331    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:33:43.231331    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:43.231556    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:33:45.716749    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:45.716749    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:46.718541    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:33:48.836277    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:33:48.837376    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:48.837376    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:33:51.296561    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:51.296561    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:52.312070    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:33:54.403261    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:33:54.412274    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:54.412274    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:33:56.897221    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:56.897221    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:57.913095    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:00.077382    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:00.077382    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:00.084599    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:02.611899    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:02.623900    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:02.624053    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:04.668991    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:04.668991    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:04.680368    6592 machine.go:94] provisionDockerMachine start ...
	I0419 17:34:04.680459    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:06.757918    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:06.757918    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:06.770308    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:09.238359    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:09.238359    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:09.256838    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:34:09.257560    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.39.106 22 <nil> <nil>}
	I0419 17:34:09.257560    6592 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 17:34:09.401524    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0419 17:34:09.401524    6592 buildroot.go:166] provisioning hostname "ha-095800-m02"
	I0419 17:34:09.401524    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:11.457086    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:11.457086    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:11.468691    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:13.954004    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:13.967037    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:13.973112    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:34:13.973891    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.39.106 22 <nil> <nil>}
	I0419 17:34:13.973891    6592 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-095800-m02 && echo "ha-095800-m02" | sudo tee /etc/hostname
	I0419 17:34:14.137164    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-095800-m02
	
	I0419 17:34:14.137293    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:16.184710    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:16.184710    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:16.184710    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:18.663350    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:18.663350    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:18.681601    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:34:18.682182    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.39.106 22 <nil> <nil>}
	I0419 17:34:18.682182    6592 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-095800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-095800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-095800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 17:34:18.838880    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 17:34:18.838880    6592 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0419 17:34:18.838880    6592 buildroot.go:174] setting up certificates
	I0419 17:34:18.838880    6592 provision.go:84] configureAuth start
	I0419 17:34:18.838880    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:20.912372    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:20.912372    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:20.927907    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:23.411852    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:23.411852    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:23.423900    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:25.508105    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:25.508105    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:25.510392    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:28.012640    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:28.025708    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:28.025813    6592 provision.go:143] copyHostCerts
	I0419 17:34:28.025813    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0419 17:34:28.026369    6592 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0419 17:34:28.026369    6592 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0419 17:34:28.026842    6592 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0419 17:34:28.028066    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0419 17:34:28.028066    6592 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0419 17:34:28.028066    6592 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0419 17:34:28.028610    6592 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0419 17:34:28.029763    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0419 17:34:28.030023    6592 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0419 17:34:28.030023    6592 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0419 17:34:28.030499    6592 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0419 17:34:28.031502    6592 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-095800-m02 san=[127.0.0.1 172.19.39.106 ha-095800-m02 localhost minikube]
	I0419 17:34:28.208607    6592 provision.go:177] copyRemoteCerts
	I0419 17:34:28.216418    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 17:34:28.216418    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:30.273993    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:30.286804    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:30.286804    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:32.809138    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:32.809273    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:32.809332    6592 sshutil.go:53] new ssh client: &{IP:172.19.39.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\id_rsa Username:docker}
	I0419 17:34:32.926461    6592 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7100316s)
	I0419 17:34:32.926461    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0419 17:34:32.927040    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0419 17:34:32.973031    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0419 17:34:32.973573    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0419 17:34:33.023523    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0419 17:34:33.024749    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0419 17:34:33.074937    6592 provision.go:87] duration metric: took 14.2360223s to configureAuth
	I0419 17:34:33.075035    6592 buildroot.go:189] setting minikube options for container-runtime
	I0419 17:34:33.075110    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:34:33.075110    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:35.106629    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:35.106629    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:35.106711    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:37.526573    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:37.535161    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:37.545127    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:34:37.545298    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.39.106 22 <nil> <nil>}
	I0419 17:34:37.545298    6592 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0419 17:34:37.682149    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0419 17:34:37.682149    6592 buildroot.go:70] root file system type: tmpfs
	I0419 17:34:37.682716    6592 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0419 17:34:37.682889    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:39.714333    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:39.714333    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:39.714333    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:42.177904    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:42.178005    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:42.182598    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:34:42.182598    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.39.106 22 <nil> <nil>}
	I0419 17:34:42.182598    6592 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.32.218"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0419 17:34:42.356000    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.32.218
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
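The comment block in the unit above explains why the drop-in contains an empty `ExecStart=` before the real one: systemd treats an empty assignment as clearing any commands inherited from the base unit, and rejects multiple `ExecStart=` lines for `Type=notify` services. A minimal sketch of that merge rule (the function name is illustrative, not systemd's):

```python
# systemd treats an empty "ExecStart=" as clearing any previously
# accumulated ExecStart commands; a later non-empty line then becomes
# the single effective command. This mimics that merge rule.
def effective_execstart(unit_text):
    cmds = []
    for line in unit_text.splitlines():
        line = line.strip()
        if line == "ExecStart=":
            cmds = []  # empty assignment resets the accumulated list
        elif line.startswith("ExecStart="):
            cmds.append(line[len("ExecStart="):])
    return cmds

unit = """[Service]
ExecStart=/usr/bin/dockerd-from-base-unit
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
"""
print(effective_execstart(unit))
```

Without the empty reset line, both `ExecStart=` commands would accumulate and systemd would refuse to start the service, which is exactly the failure mode the unit's comments describe.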
	
	I0419 17:34:42.356074    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:44.361364    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:44.361364    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:44.374941    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:46.828897    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:46.828897    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:46.835117    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:34:46.835590    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.39.106 22 <nil> <nil>}
	I0419 17:34:46.835590    6592 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0419 17:34:48.975232    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0419 17:34:48.975232    6592 machine.go:97] duration metric: took 44.2947577s to provisionDockerMachine
	I0419 17:34:48.975232    6592 client.go:171] duration metric: took 1m52.430861s to LocalClient.Create
	I0419 17:34:48.975232    6592 start.go:167] duration metric: took 1m52.430964s to libmachine.API.Create "ha-095800"
	I0419 17:34:48.975789    6592 start.go:293] postStartSetup for "ha-095800-m02" (driver="hyperv")
	I0419 17:34:48.975857    6592 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 17:34:48.990268    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 17:34:48.990268    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:51.012633    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:51.012633    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:51.012778    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:53.504206    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:53.504206    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:53.504309    6592 sshutil.go:53] new ssh client: &{IP:172.19.39.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\id_rsa Username:docker}
	I0419 17:34:53.623546    6592 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6332666s)
	I0419 17:34:53.640312    6592 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 17:34:53.648053    6592 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 17:34:53.648053    6592 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0419 17:34:53.648591    6592 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0419 17:34:53.649634    6592 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> 34162.pem in /etc/ssl/certs
	I0419 17:34:53.649634    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /etc/ssl/certs/34162.pem
	I0419 17:34:53.666517    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 17:34:53.685294    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /etc/ssl/certs/34162.pem (1708 bytes)
	I0419 17:34:53.732503    6592 start.go:296] duration metric: took 4.7567024s for postStartSetup
	I0419 17:34:53.735682    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:55.779501    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:55.779501    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:55.791981    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:58.257053    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:58.257053    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:58.259551    6592 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
	I0419 17:34:58.276027    6592 start.go:128] duration metric: took 2m1.7340016s to createHost
	I0419 17:34:58.276182    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:35:00.339325    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:35:00.339325    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:00.339410    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:35:02.802652    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:35:02.814096    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:02.820618    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:35:02.821261    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.39.106 22 <nil> <nil>}
	I0419 17:35:02.821261    6592 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 17:35:02.956538    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713573302.947564889
	
	I0419 17:35:02.956538    6592 fix.go:216] guest clock: 1713573302.947564889
	I0419 17:35:02.956538    6592 fix.go:229] Guest: 2024-04-19 17:35:02.947564889 -0700 PDT Remote: 2024-04-19 17:34:58.2761069 -0700 PDT m=+324.643398501 (delta=4.671457989s)
	I0419 17:35:02.956538    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:35:04.981992    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:35:04.993935    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:04.994227    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:35:07.473549    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:35:07.485780    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:07.491553    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:35:07.492737    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.39.106 22 <nil> <nil>}
	I0419 17:35:07.492737    6592 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713573302
	I0419 17:35:07.640720    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: Sat Apr 20 00:35:02 UTC 2024
	
	I0419 17:35:07.640720    6592 fix.go:236] clock set: Sat Apr 20 00:35:02 UTC 2024
	 (err=<nil>)
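The `fix.go` lines above compare the guest clock (read via `date +%s.%N` over SSH) against the host wall clock and reset the guest with `date -s` when they drift. The reported delta can be reproduced from the two timestamps in the log:

```python
from datetime import datetime, timedelta, timezone

# Guest clock reported by the VM (epoch seconds, from the log above).
guest = 1713573302.947564889
# Host wall clock at the same moment: 2024-04-19 17:34:58.2761069 -0700
# (fraction rounded to microseconds for datetime).
host = datetime(2024, 4, 19, 17, 34, 58, 276107,
                tzinfo=timezone(timedelta(hours=-7))).timestamp()
delta = guest - host
print(f"delta={delta:.6f}s")  # ~4.671s, the skew fix.go reports before resetting the guest clock
```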
	I0419 17:35:07.640720    6592 start.go:83] releasing machines lock for "ha-095800-m02", held for 2m11.0992556s
	I0419 17:35:07.641317    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:35:09.679388    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:35:09.688695    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:09.688869    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:35:12.129591    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:35:12.129591    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:12.141739    6592 out.go:177] * Found network options:
	I0419 17:35:12.147015    6592 out.go:177]   - NO_PROXY=172.19.32.218
	W0419 17:35:12.149450    6592 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 17:35:12.152667    6592 out.go:177]   - NO_PROXY=172.19.32.218
	W0419 17:35:12.155023    6592 proxy.go:119] fail to check proxy env: Error ip not in block
	W0419 17:35:12.156655    6592 proxy.go:119] fail to check proxy env: Error ip not in block
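The repeated `fail to check proxy env: Error ip not in block` warnings come from testing whether the node IP falls inside a NO_PROXY entry; here the entry is a bare IP (`172.19.32.218`), not a CIDR block, so the containment test cannot succeed. A simplified sketch of that check with the stdlib `ipaddress` module (the helper name is illustrative, not minikube's):

```python
import ipaddress

# Simplified containment test: a bare IP parses as a /32 network, so
# any other node IP is "not in block". A real CIDR entry would match.
def ip_in_no_proxy(ip, no_proxy_entry):
    try:
        block = ipaddress.ip_network(no_proxy_entry, strict=False)
    except ValueError:
        return False
    return ipaddress.ip_address(ip) in block

print(ip_in_no_proxy("172.19.39.106", "172.19.32.218"))   # bare IP entry: no match
print(ip_in_no_proxy("172.19.39.106", "172.19.32.0/20"))  # CIDR entry covering the node
```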
	I0419 17:35:12.160668    6592 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 17:35:12.161532    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:35:12.171567    6592 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0419 17:35:12.171567    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:35:14.233893    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:35:14.233893    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:14.234033    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:35:14.249163    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:35:14.257059    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:14.257191    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:35:16.773275    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:35:16.773275    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:16.786750    6592 sshutil.go:53] new ssh client: &{IP:172.19.39.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\id_rsa Username:docker}
	I0419 17:35:16.810451    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:35:16.812224    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:16.812276    6592 sshutil.go:53] new ssh client: &{IP:172.19.39.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\id_rsa Username:docker}
	I0419 17:35:16.938031    6592 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7773509s)
	I0419 17:35:16.938031    6592 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7664522s)
	W0419 17:35:16.938174    6592 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 17:35:16.951426    6592 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 17:35:16.975914    6592 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 17:35:16.982893    6592 start.go:494] detecting cgroup driver to use...
	I0419 17:35:16.982923    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 17:35:17.032304    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0419 17:35:17.071454    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0419 17:35:17.095266    6592 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0419 17:35:17.110092    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0419 17:35:17.148867    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 17:35:17.182282    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0419 17:35:17.220129    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 17:35:17.259774    6592 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 17:35:17.296484    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0419 17:35:17.330959    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0419 17:35:17.366377    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
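The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place; for example, forcing `SystemdCgroup = false` while preserving the line's indentation via a capture group. The same substitution expressed in Python, for illustration:

```python
import re

# Same pattern as the `sed -i -r` run above: force SystemdCgroup to
# false, keeping the original indentation captured in group 1.
line = '            SystemdCgroup = true'
fixed = re.sub(r'^( *)SystemdCgroup = .*$', r'\1SystemdCgroup = false', line)
print(fixed)
```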
	I0419 17:35:17.402845    6592 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 17:35:17.438569    6592 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 17:35:17.478067    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:35:17.696568    6592 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0419 17:35:17.731857    6592 start.go:494] detecting cgroup driver to use...
	I0419 17:35:17.747208    6592 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0419 17:35:17.790020    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 17:35:17.830005    6592 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 17:35:17.879049    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 17:35:17.918467    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 17:35:17.962077    6592 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0419 17:35:18.029673    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 17:35:18.056357    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 17:35:18.107216    6592 ssh_runner.go:195] Run: which cri-dockerd
	I0419 17:35:18.129495    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0419 17:35:18.148830    6592 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0419 17:35:18.197633    6592 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0419 17:35:18.402292    6592 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0419 17:35:18.596698    6592 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0419 17:35:18.596698    6592 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0419 17:35:18.642989    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:35:18.855322    6592 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 17:35:21.409619    6592 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5542379s)
	I0419 17:35:21.423805    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0419 17:35:21.465962    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 17:35:21.506351    6592 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0419 17:35:21.718975    6592 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0419 17:35:21.918945    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:35:22.137750    6592 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0419 17:35:22.185651    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 17:35:22.225396    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:35:22.425690    6592 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0419 17:35:22.537635    6592 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0419 17:35:22.547991    6592 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0419 17:35:22.558080    6592 start.go:562] Will wait 60s for crictl version
	I0419 17:35:22.568341    6592 ssh_runner.go:195] Run: which crictl
	I0419 17:35:22.589384    6592 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 17:35:22.632396    6592 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0419 17:35:22.642172    6592 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 17:35:22.698356    6592 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 17:35:22.739421    6592 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0419 17:35:22.742431    6592 out.go:177]   - env NO_PROXY=172.19.32.218
	I0419 17:35:22.744839    6592 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0419 17:35:22.749574    6592 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0419 17:35:22.749574    6592 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0419 17:35:22.749717    6592 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0419 17:35:22.749717    6592 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8c:b9:25 Flags:up|broadcast|multicast|running}
	I0419 17:35:22.751741    6592 ip.go:210] interface addr: fe80::ce04:318e:a1d8:4460/64
	I0419 17:35:22.751741    6592 ip.go:210] interface addr: 172.19.32.1/20
	I0419 17:35:22.760477    6592 ssh_runner.go:195] Run: grep 172.19.32.1	host.minikube.internal$ /etc/hosts
	I0419 17:35:22.773484    6592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.32.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
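The command above is minikube's idempotent /etc/hosts update: filter out any stale `host.minikube.internal` line, append the fresh mapping, and copy the temp file back into place. The same idiom extracted into a standalone function (the file name and hostname below are illustrative, not minikube's):

```shell
#!/usr/bin/env bash
# Idempotently pin <hostname> to <ip> in a hosts-format file.
update_hosts() {
  local file=$1 ip=$2 name=$3
  local tmp
  tmp=$(mktemp)
  # Drop any existing "<tab><name>" line, then append the new mapping.
  { grep -v $'\t'"${name}"'$' "$file"; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
  cp "$tmp" "$file" && rm -f "$tmp"
}
```

Running it twice leaves exactly one entry, which is why minikube can call it unconditionally on every start.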
	I0419 17:35:22.790772    6592 mustload.go:65] Loading cluster: ha-095800
	I0419 17:35:22.790772    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:35:22.798746    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:35:24.836383    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:35:24.836383    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:24.848764    6592 host.go:66] Checking if "ha-095800" exists ...
	I0419 17:35:24.849609    6592 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800 for IP: 172.19.39.106
	I0419 17:35:24.849609    6592 certs.go:194] generating shared ca certs ...
	I0419 17:35:24.849609    6592 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:35:24.850341    6592 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0419 17:35:24.850592    6592 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0419 17:35:24.850592    6592 certs.go:256] generating profile certs ...
	I0419 17:35:24.851261    6592 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\client.key
	I0419 17:35:24.851261    6592 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.87ccae9f
	I0419 17:35:24.851261    6592 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.87ccae9f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.32.218 172.19.39.106 172.19.47.254]
	I0419 17:35:25.097787    6592 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.87ccae9f ...
	I0419 17:35:25.097787    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.87ccae9f: {Name:mk23b04572e4fd34b587d1df7a9f07c1c4f91844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:35:25.105537    6592 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.87ccae9f ...
	I0419 17:35:25.105537    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.87ccae9f: {Name:mk1ae5628c1bb6755308a3a67f856b296285d46b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:35:25.106782    6592 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.87ccae9f -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt
	I0419 17:35:25.121574    6592 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.87ccae9f -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key
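The apiserver certificate minted above carries IP SANs for the first service-CIDR address (10.96.0.1), loopback, both control-plane node IPs, and the HA virtual IP (172.19.47.254), so clients can validate the server on any of those addresses. A rough equivalent with plain OpenSSL — self-signed here for brevity, whereas minikube signs against its cluster CA; requires OpenSSL 1.1.1+ for `-addext`/`-ext`:

```shell
# Mint a throwaway cert whose IP SANs mirror the list logged above.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikube" \
  -addext "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:172.19.32.218,IP:172.19.39.106,IP:172.19.47.254" \
  -keyout apiserver.key -out apiserver.crt 2>/dev/null
# Show which addresses the cert is actually valid for.
openssl x509 -in apiserver.crt -noout -ext subjectAltName
```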
	I0419 17:35:25.123108    6592 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key
	I0419 17:35:25.123108    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 17:35:25.123108    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0419 17:35:25.123705    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 17:35:25.123705    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 17:35:25.124239    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 17:35:25.124521    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 17:35:25.124567    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 17:35:25.124567    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 17:35:25.125650    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem (1338 bytes)
	W0419 17:35:25.126122    6592 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416_empty.pem, impossibly tiny 0 bytes
	I0419 17:35:25.126197    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0419 17:35:25.126502    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0419 17:35:25.126820    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0419 17:35:25.127015    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0419 17:35:25.127558    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem (1708 bytes)
	I0419 17:35:25.127756    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem -> /usr/share/ca-certificates/3416.pem
	I0419 17:35:25.127959    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /usr/share/ca-certificates/34162.pem
	I0419 17:35:25.127959    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:35:25.127959    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:35:27.179721    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:35:27.182686    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:27.182792    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:35:29.677269    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:35:29.677269    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:29.691642    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:35:29.808705    6592 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0419 17:35:29.818028    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0419 17:35:29.851852    6592 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0419 17:35:29.860192    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0419 17:35:29.899026    6592 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0419 17:35:29.908596    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0419 17:35:29.944431    6592 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0419 17:35:29.951490    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0419 17:35:29.985304    6592 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0419 17:35:29.991980    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0419 17:35:30.024564    6592 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0419 17:35:30.033365    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0419 17:35:30.057985    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 17:35:30.109679    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 17:35:30.167380    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 17:35:30.206495    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 17:35:30.266020    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0419 17:35:30.311495    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0419 17:35:30.374663    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 17:35:30.429114    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0419 17:35:30.486075    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem --> /usr/share/ca-certificates/3416.pem (1338 bytes)
	I0419 17:35:30.533124    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /usr/share/ca-certificates/34162.pem (1708 bytes)
	I0419 17:35:30.581671    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 17:35:30.625672    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0419 17:35:30.656210    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0419 17:35:30.693315    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0419 17:35:30.727155    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0419 17:35:30.760316    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0419 17:35:30.800343    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0419 17:35:30.831688    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0419 17:35:30.877019    6592 ssh_runner.go:195] Run: openssl version
	I0419 17:35:30.902910    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3416.pem && ln -fs /usr/share/ca-certificates/3416.pem /etc/ssl/certs/3416.pem"
	I0419 17:35:30.937753    6592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3416.pem
	I0419 17:35:30.945533    6592 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 17:35:30.958825    6592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3416.pem
	I0419 17:35:30.982855    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3416.pem /etc/ssl/certs/51391683.0"
	I0419 17:35:31.019450    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34162.pem && ln -fs /usr/share/ca-certificates/34162.pem /etc/ssl/certs/34162.pem"
	I0419 17:35:31.053730    6592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34162.pem
	I0419 17:35:31.064997    6592 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 17:35:31.079060    6592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34162.pem
	I0419 17:35:31.100760    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34162.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 17:35:31.135906    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 17:35:31.168805    6592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:35:31.176056    6592 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:35:31.189769    6592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:35:31.212767    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
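The `openssl x509 -hash` / `ln -fs` pairs above build OpenSSL's subject-hash lookup scheme: a CA in `/etc/ssl/certs` is found at verify time via a symlink named `<subject-hash>.0` that points at the PEM. The same dance in a scratch directory (the `demoCA` cert here is generated purely for illustration):

```shell
certdir=$(mktemp -d)
# A throwaway self-signed CA to hash and link.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$certdir/demo.key" -out "$certdir/demo.pem" 2>/dev/null
# The subject hash names the symlink OpenSSL looks for during verification.
h=$(openssl x509 -hash -noout -in "$certdir/demo.pem")
ln -fs "$certdir/demo.pem" "$certdir/$h.0"
ls -l "$certdir/$h.0"
```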
	I0419 17:35:31.254986    6592 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 17:35:31.261890    6592 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 17:35:31.262228    6592 kubeadm.go:928] updating node {m02 172.19.39.106 8443 v1.30.0 docker true true} ...
	I0419 17:35:31.262469    6592 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-095800-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP:172.19.47.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 17:35:31.262533    6592 kube-vip.go:111] generating kube-vip config ...
	I0419 17:35:31.276469    6592 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0419 17:35:31.302571    6592 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0419 17:35:31.302645    6592 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.47.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0419 17:35:31.313717    6592 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 17:35:31.334482    6592 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0419 17:35:31.349549    6592 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0419 17:35:31.374757    6592 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm
	I0419 17:35:31.374757    6592 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet
	I0419 17:35:31.374757    6592 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl
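The `?checksum=file:<url>.sha256` suffix on each download URL tells minikube's downloader to fetch the published digest and refuse a corrupted binary (the `?checksum=` query syntax is the convention of the go-getter library minikube builds on). The verification step itself is just SHA-256 over the payload; an offline sketch with coreutils, file names illustrative:

```shell
workdir=$(mktemp -d)
printf 'fake-kubeadm-bytes' > "$workdir/kubeadm"
# Publish the digest alongside the artifact, as dl.k8s.io does.
sha256sum "$workdir/kubeadm" | awk '{print $1}' > "$workdir/kubeadm.sha256"
# Verify: recompute and compare before trusting the binary.
got=$(sha256sum "$workdir/kubeadm" | awk '{print $1}')
want=$(cat "$workdir/kubeadm.sha256")
[ "$got" = "$want" ] && echo "checksum OK"
```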
	I0419 17:35:32.388056    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0419 17:35:32.408987    6592 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0419 17:35:32.410351    6592 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0419 17:35:32.418290    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0419 17:35:34.055018    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0419 17:35:34.079226    6592 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0419 17:35:34.086778    6592 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0419 17:35:34.087039    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0419 17:35:36.092135    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 17:35:36.126993    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0419 17:35:36.140982    6592 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0419 17:35:36.151934    6592 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0419 17:35:36.151934    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0419 17:35:36.764876    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0419 17:35:36.782288    6592 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0419 17:35:36.820042    6592 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 17:35:36.851191    6592 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0419 17:35:36.897717    6592 ssh_runner.go:195] Run: grep 172.19.47.254	control-plane.minikube.internal$ /etc/hosts
	I0419 17:35:36.903584    6592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.47.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 17:35:36.939185    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:35:37.132075    6592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 17:35:37.164877    6592 host.go:66] Checking if "ha-095800" exists ...
	I0419 17:35:37.165610    6592 start.go:316] joinCluster: &{Name:ha-095800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP:172.19.47.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.32.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.39.106 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 17:35:37.166199    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0419 17:35:37.166355    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:35:39.198819    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:35:39.198819    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:39.210457    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:35:41.714026    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:35:41.714026    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:41.725963    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:35:41.949562    6592 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7832813s)
	I0419 17:35:41.949562    6592 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.19.39.106 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 17:35:41.949562    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n3zyqk.8dhrqnhr8ufhyc6l --discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-095800-m02 --control-plane --apiserver-advertise-address=172.19.39.106 --apiserver-bind-port=8443"
	I0419 17:36:25.521141    6592 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n3zyqk.8dhrqnhr8ufhyc6l --discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-095800-m02 --control-plane --apiserver-advertise-address=172.19.39.106 --apiserver-bind-port=8443": (43.5714167s)
	I0419 17:36:25.521203    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0419 17:36:26.306474    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-095800-m02 minikube.k8s.io/updated_at=2024_04_19T17_36_26_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=ha-095800 minikube.k8s.io/primary=false
	I0419 17:36:26.471324    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-095800-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0419 17:36:26.617852    6592 start.go:318] duration metric: took 49.4520716s to joinCluster
	I0419 17:36:26.618042    6592 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.39.106 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 17:36:26.620452    6592 out.go:177] * Verifying Kubernetes components...
	I0419 17:36:26.618488    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:36:26.634759    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:36:26.965867    6592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 17:36:26.992399    6592 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 17:36:26.993022    6592 kapi.go:59] client config for ha-095800: &rest.Config{Host:"https://172.19.47.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-095800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-095800\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c35620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0419 17:36:26.993230    6592 kubeadm.go:477] Overriding stale ClientConfig host https://172.19.47.254:8443 with https://172.19.32.218:8443
	I0419 17:36:26.993484    6592 node_ready.go:35] waiting up to 6m0s for node "ha-095800-m02" to be "Ready" ...
	I0419 17:36:26.994061    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:26.994061    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:26.994061    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:26.994061    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:27.009242    6592 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0419 17:36:27.503919    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:27.503985    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:27.503985    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:27.504018    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:27.509503    6592 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:36:27.996078    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:27.996078    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:27.996078    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:27.996078    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:28.002025    6592 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:36:28.507122    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:28.507122    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:28.507122    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:28.507291    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:28.512194    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:36:29.007292    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:29.007292    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:29.007292    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:29.007292    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:29.014078    6592 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:36:29.016864    6592 node_ready.go:53] node "ha-095800-m02" has status "Ready":"False"
	I0419 17:36:29.502218    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:29.502218    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:29.502218    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:29.502218    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:29.506540    6592 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:36:30.007826    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:30.007826    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:30.007826    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:30.007826    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:30.010044    6592 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:36:30.511873    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:30.511951    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:30.511994    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:30.511994    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:30.513795    6592 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:36:31.006568    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:31.006745    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:31.006745    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:31.006745    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:31.018441    6592 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0419 17:36:31.023740    6592 node_ready.go:53] node "ha-095800-m02" has status "Ready":"False"
	I0419 17:36:31.494606    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:31.494838    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:31.494838    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:31.494897    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:31.498401    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:36:31.997171    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:31.997206    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:31.997206    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:31.997206    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:32.001975    6592 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:36:32.510298    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:32.510367    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:32.510401    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:32.510401    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:32.515416    6592 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:36:33.004763    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:33.005020    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:33.005020    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:33.005020    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:33.008890    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:36:33.508603    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:33.508603    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:33.508603    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:33.508603    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:33.671262    6592 round_trippers.go:574] Response Status: 200 OK in 162 milliseconds
	I0419 17:36:33.684260    6592 node_ready.go:53] node "ha-095800-m02" has status "Ready":"False"
	I0419 17:36:34.003928    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:34.003928    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:34.003928    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:34.003928    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:34.039303    6592 round_trippers.go:574] Response Status: 200 OK in 35 milliseconds
	I0419 17:36:34.501746    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:34.501746    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:34.501746    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:34.501746    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:34.509305    6592 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 17:36:34.998216    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:34.998318    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:34.998318    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:34.998318    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:35.003330    6592 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:36:35.508011    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:35.508282    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:35.508282    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:35.508282    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:35.514142    6592 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:36:36.006539    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:36.006539    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:36.006539    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:36.006539    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:36.008482    6592 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:36:36.013454    6592 node_ready.go:53] node "ha-095800-m02" has status "Ready":"False"
	I0419 17:36:36.517783    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:36.517959    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:36.517959    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:36.517959    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:36.518493    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:37.006346    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:37.006346    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:37.006346    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:37.006346    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:37.006727    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:37.517049    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:37.517049    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:37.517139    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:37.517139    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:37.522813    6592 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:36:37.997763    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:37.997763    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:37.997763    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:37.997763    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.003365    6592 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:36:38.003966    6592 node_ready.go:49] node "ha-095800-m02" has status "Ready":"True"
	I0419 17:36:38.004099    6592 node_ready.go:38] duration metric: took 11.0100537s for node "ha-095800-m02" to be "Ready" ...
	I0419 17:36:38.004099    6592 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 17:36:38.004353    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:36:38.004353    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.004353    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.004413    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.016671    6592 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0419 17:36:38.030873    6592 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7mk28" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.030873    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7mk28
	I0419 17:36:38.030873    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.030873    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.030873    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.032596    6592 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:36:38.040724    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:38.040830    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.040830    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.040830    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.051702    6592 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0419 17:36:38.053343    6592 pod_ready.go:92] pod "coredns-7db6d8ff4d-7mk28" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:38.053343    6592 pod_ready.go:81] duration metric: took 22.4697ms for pod "coredns-7db6d8ff4d-7mk28" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.053401    6592 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vklb9" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.053524    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vklb9
	I0419 17:36:38.053524    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.053524    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.053582    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.058198    6592 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:36:38.061933    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:38.062058    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.062058    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.062058    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.062286    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:38.068485    6592 pod_ready.go:92] pod "coredns-7db6d8ff4d-vklb9" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:38.068557    6592 pod_ready.go:81] duration metric: took 15.0842ms for pod "coredns-7db6d8ff4d-vklb9" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.068557    6592 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.068629    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-095800
	I0419 17:36:38.068714    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.068714    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.068714    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.073133    6592 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:36:38.073273    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:38.073805    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.073805    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.073847    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.076446    6592 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:36:38.079503    6592 pod_ready.go:92] pod "etcd-ha-095800" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:38.079582    6592 pod_ready.go:81] duration metric: took 11.0251ms for pod "etcd-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.079582    6592 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.079655    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-095800-m02
	I0419 17:36:38.079730    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.079730    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.079730    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.080431    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:38.085921    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:38.085993    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.085993    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.085993    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.089401    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:36:38.090210    6592 pod_ready.go:92] pod "etcd-ha-095800-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:38.090210    6592 pod_ready.go:81] duration metric: took 10.6286ms for pod "etcd-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.090210    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.198670    6592 request.go:629] Waited for 107.6412ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800
	I0419 17:36:38.198872    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800
	I0419 17:36:38.198872    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.198872    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.198872    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.199582    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:38.413161    6592 request.go:629] Waited for 208.2183ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:38.413280    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:38.413280    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.413280    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.413280    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.413660    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:38.419496    6592 pod_ready.go:92] pod "kube-apiserver-ha-095800" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:38.419604    6592 pod_ready.go:81] duration metric: took 328.8362ms for pod "kube-apiserver-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.419604    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.608885    6592 request.go:629] Waited for 188.7472ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m02
	I0419 17:36:38.608969    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m02
	I0419 17:36:38.608969    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.608969    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.609090    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.615536    6592 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:36:38.809691    6592 request.go:629] Waited for 190.3486ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:38.809941    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:38.810111    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.810111    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.810111    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.810465    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:38.816133    6592 pod_ready.go:92] pod "kube-apiserver-ha-095800-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:38.816204    6592 pod_ready.go:81] duration metric: took 396.5278ms for pod "kube-apiserver-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.816204    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:39.012105    6592 request.go:629] Waited for 195.6204ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800
	I0419 17:36:39.012225    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800
	I0419 17:36:39.012225    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:39.012225    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:39.012225    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:39.012717    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:39.208236    6592 request.go:629] Waited for 188.8556ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:39.208346    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:39.208420    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:39.208420    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:39.208452    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:39.208865    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:39.214503    6592 pod_ready.go:92] pod "kube-controller-manager-ha-095800" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:39.214503    6592 pod_ready.go:81] duration metric: took 398.298ms for pod "kube-controller-manager-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:39.214503    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:39.407550    6592 request.go:629] Waited for 192.7473ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800-m02
	I0419 17:36:39.407550    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800-m02
	I0419 17:36:39.407550    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:39.407550    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:39.407550    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:39.414644    6592 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:36:39.608178    6592 request.go:629] Waited for 192.3441ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:39.608510    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:39.608576    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:39.608612    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:39.608612    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:39.622229    6592 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:36:39.622920    6592 pod_ready.go:92] pod "kube-controller-manager-ha-095800-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:39.622920    6592 pod_ready.go:81] duration metric: took 408.4159ms for pod "kube-controller-manager-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:39.622920    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4nldk" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:39.808013    6592 request.go:629] Waited for 184.7351ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4nldk
	I0419 17:36:39.808346    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4nldk
	I0419 17:36:39.808346    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:39.808427    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:39.808427    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:39.808672    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:40.009217    6592 request.go:629] Waited for 193.4195ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:40.009406    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:40.009406    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:40.009406    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:40.009406    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:40.009759    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:40.015867    6592 pod_ready.go:92] pod "kube-proxy-4nldk" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:40.015867    6592 pod_ready.go:81] duration metric: took 392.9461ms for pod "kube-proxy-4nldk" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:40.015867    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vq826" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:40.203356    6592 request.go:629] Waited for 187.2505ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vq826
	I0419 17:36:40.203572    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vq826
	I0419 17:36:40.203572    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:40.203572    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:40.203572    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:40.214446    6592 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0419 17:36:40.410827    6592 request.go:629] Waited for 192.5621ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:40.411036    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:40.411148    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:40.411182    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:40.411182    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:40.411498    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:40.417793    6592 pod_ready.go:92] pod "kube-proxy-vq826" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:40.418325    6592 pod_ready.go:81] duration metric: took 402.4574ms for pod "kube-proxy-vq826" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:40.418325    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:40.603529    6592 request.go:629] Waited for 184.8265ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800
	I0419 17:36:40.603670    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800
	I0419 17:36:40.603821    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:40.603821    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:40.603821    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:40.612277    6592 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 17:36:40.810889    6592 request.go:629] Waited for 196.7316ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:40.811027    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:40.811093    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:40.811169    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:40.811169    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:40.812713    6592 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:36:40.817079    6592 pod_ready.go:92] pod "kube-scheduler-ha-095800" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:40.817229    6592 pod_ready.go:81] duration metric: took 398.7525ms for pod "kube-scheduler-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:40.817229    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:41.001845    6592 request.go:629] Waited for 184.5114ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800-m02
	I0419 17:36:41.002122    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800-m02
	I0419 17:36:41.002122    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:41.002122    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:41.002122    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:41.002824    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:41.215011    6592 request.go:629] Waited for 206.5742ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:41.215011    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:41.215011    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:41.215011    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:41.215011    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:41.220661    6592 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:36:41.222751    6592 pod_ready.go:92] pod "kube-scheduler-ha-095800-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:41.222751    6592 pod_ready.go:81] duration metric: took 405.5202ms for pod "kube-scheduler-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:41.222751    6592 pod_ready.go:38] duration metric: took 3.2185249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 17:36:41.222751    6592 api_server.go:52] waiting for apiserver process to appear ...
	I0419 17:36:41.237304    6592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 17:36:41.277078    6592 api_server.go:72] duration metric: took 14.6588877s to wait for apiserver process to appear ...
	I0419 17:36:41.277130    6592 api_server.go:88] waiting for apiserver healthz status ...
	I0419 17:36:41.277180    6592 api_server.go:253] Checking apiserver healthz at https://172.19.32.218:8443/healthz ...
	I0419 17:36:41.283624    6592 api_server.go:279] https://172.19.32.218:8443/healthz returned 200:
	ok
	I0419 17:36:41.285414    6592 round_trippers.go:463] GET https://172.19.32.218:8443/version
	I0419 17:36:41.285414    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:41.285414    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:41.285414    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:41.285965    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:41.287754    6592 api_server.go:141] control plane version: v1.30.0
	I0419 17:36:41.287839    6592 api_server.go:131] duration metric: took 10.7087ms to wait for apiserver health ...
	I0419 17:36:41.287894    6592 system_pods.go:43] waiting for kube-system pods to appear ...
	I0419 17:36:41.408501    6592 request.go:629] Waited for 120.3424ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:36:41.408643    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:36:41.408643    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:41.408643    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:41.408643    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:41.409448    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:41.424834    6592 system_pods.go:59] 17 kube-system pods found
	I0419 17:36:41.424834    6592 system_pods.go:61] "coredns-7db6d8ff4d-7mk28" [e9d98fbb-21cc-4618-9709-0b27986c63b1] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "coredns-7db6d8ff4d-vklb9" [a1f46798-9bf9-4abe-9d6d-573902a0d373] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "etcd-ha-095800" [1aaf32fa-58bb-40f3-a162-21259eb4f376] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "etcd-ha-095800-m02" [5b0fc0be-2f86-4758-b8eb-aeb31245afd7] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kindnet-7j4cr" [92ce62b8-71b2-4deb-b295-cf938509a4e5] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kindnet-kpn69" [49ffd8bc-d455-4f64-9822-e2d363df7cc7] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-apiserver-ha-095800" [ebaad661-6759-415e-b65f-14d6ffb46853] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-apiserver-ha-095800-m02" [99267604-9885-472a-aab9-eda6b150457d] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-controller-manager-ha-095800" [dc9b9d64-b78b-44e3-a7f6-26ba6007b6dc] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-controller-manager-ha-095800-m02" [534ea924-2ff9-48ec-a02c-ce23e4c47324] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-proxy-4nldk" [79c714ec-b6ec-4cff-86fb-f560bed67202] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-proxy-vq826" [d2b22474-6974-4cbd-8565-95facc3c817e] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-scheduler-ha-095800" [af0f5d53-c6ab-4235-b9a2-ce0a371ff55f] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-scheduler-ha-095800-m02" [000d5f12-1c3f-41ba-b0dd-696da8c6b8ad] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-vip-ha-095800" [2fe74317-1ff4-4147-ae17-f2f31f4f06ba] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-vip-ha-095800-m02" [e80eec5a-c346-4f90-a843-b6ed2d111f0b] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "storage-provisioner" [f58269e6-1ef1-442a-972b-cc05662b174c] Running
	I0419 17:36:41.424834    6592 system_pods.go:74] duration metric: took 136.9397ms to wait for pod list to return data ...
	I0419 17:36:41.424834    6592 default_sa.go:34] waiting for default service account to be created ...
	I0419 17:36:41.607603    6592 request.go:629] Waited for 182.7682ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/default/serviceaccounts
	I0419 17:36:41.607603    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/default/serviceaccounts
	I0419 17:36:41.607603    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:41.607603    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:41.607603    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:41.608582    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:41.613379    6592 default_sa.go:45] found service account: "default"
	I0419 17:36:41.613505    6592 default_sa.go:55] duration metric: took 188.6706ms for default service account to be created ...
	I0419 17:36:41.613505    6592 system_pods.go:116] waiting for k8s-apps to be running ...
	I0419 17:36:41.822990    6592 request.go:629] Waited for 209.2696ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:36:41.823080    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:36:41.823080    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:41.823080    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:41.823080    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:41.830119    6592 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 17:36:41.839008    6592 system_pods.go:86] 17 kube-system pods found
	I0419 17:36:41.839586    6592 system_pods.go:89] "coredns-7db6d8ff4d-7mk28" [e9d98fbb-21cc-4618-9709-0b27986c63b1] Running
	I0419 17:36:41.839586    6592 system_pods.go:89] "coredns-7db6d8ff4d-vklb9" [a1f46798-9bf9-4abe-9d6d-573902a0d373] Running
	I0419 17:36:41.839586    6592 system_pods.go:89] "etcd-ha-095800" [1aaf32fa-58bb-40f3-a162-21259eb4f376] Running
	I0419 17:36:41.839586    6592 system_pods.go:89] "etcd-ha-095800-m02" [5b0fc0be-2f86-4758-b8eb-aeb31245afd7] Running
	I0419 17:36:41.839586    6592 system_pods.go:89] "kindnet-7j4cr" [92ce62b8-71b2-4deb-b295-cf938509a4e5] Running
	I0419 17:36:41.839586    6592 system_pods.go:89] "kindnet-kpn69" [49ffd8bc-d455-4f64-9822-e2d363df7cc7] Running
	I0419 17:36:41.839586    6592 system_pods.go:89] "kube-apiserver-ha-095800" [ebaad661-6759-415e-b65f-14d6ffb46853] Running
	I0419 17:36:41.839716    6592 system_pods.go:89] "kube-apiserver-ha-095800-m02" [99267604-9885-472a-aab9-eda6b150457d] Running
	I0419 17:36:41.839716    6592 system_pods.go:89] "kube-controller-manager-ha-095800" [dc9b9d64-b78b-44e3-a7f6-26ba6007b6dc] Running
	I0419 17:36:41.839716    6592 system_pods.go:89] "kube-controller-manager-ha-095800-m02" [534ea924-2ff9-48ec-a02c-ce23e4c47324] Running
	I0419 17:36:41.839766    6592 system_pods.go:89] "kube-proxy-4nldk" [79c714ec-b6ec-4cff-86fb-f560bed67202] Running
	I0419 17:36:41.839807    6592 system_pods.go:89] "kube-proxy-vq826" [d2b22474-6974-4cbd-8565-95facc3c817e] Running
	I0419 17:36:41.839807    6592 system_pods.go:89] "kube-scheduler-ha-095800" [af0f5d53-c6ab-4235-b9a2-ce0a371ff55f] Running
	I0419 17:36:41.839807    6592 system_pods.go:89] "kube-scheduler-ha-095800-m02" [000d5f12-1c3f-41ba-b0dd-696da8c6b8ad] Running
	I0419 17:36:41.839807    6592 system_pods.go:89] "kube-vip-ha-095800" [2fe74317-1ff4-4147-ae17-f2f31f4f06ba] Running
	I0419 17:36:41.839807    6592 system_pods.go:89] "kube-vip-ha-095800-m02" [e80eec5a-c346-4f90-a843-b6ed2d111f0b] Running
	I0419 17:36:41.839807    6592 system_pods.go:89] "storage-provisioner" [f58269e6-1ef1-442a-972b-cc05662b174c] Running
	I0419 17:36:41.839807    6592 system_pods.go:126] duration metric: took 226.3017ms to wait for k8s-apps to be running ...
	I0419 17:36:41.839807    6592 system_svc.go:44] waiting for kubelet service to be running ....
	I0419 17:36:41.848433    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 17:36:41.874746    6592 system_svc.go:56] duration metric: took 34.9388ms WaitForService to wait for kubelet
	I0419 17:36:41.874746    6592 kubeadm.go:576] duration metric: took 15.2566056s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 17:36:41.874746    6592 node_conditions.go:102] verifying NodePressure condition ...
	I0419 17:36:41.997978    6592 request.go:629] Waited for 123.0419ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes
	I0419 17:36:41.998034    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes
	I0419 17:36:41.998034    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:41.998034    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:41.998034    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:41.998565    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:42.004110    6592 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 17:36:42.004208    6592 node_conditions.go:123] node cpu capacity is 2
	I0419 17:36:42.004243    6592 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 17:36:42.004243    6592 node_conditions.go:123] node cpu capacity is 2
	I0419 17:36:42.004243    6592 node_conditions.go:105] duration metric: took 129.4965ms to run NodePressure ...
	I0419 17:36:42.004298    6592 start.go:240] waiting for startup goroutines ...
	I0419 17:36:42.004323    6592 start.go:254] writing updated cluster config ...
	I0419 17:36:42.008503    6592 out.go:177] 
	I0419 17:36:42.020747    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:36:42.023505    6592 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
	I0419 17:36:42.029464    6592 out.go:177] * Starting "ha-095800-m03" control-plane node in "ha-095800" cluster
	I0419 17:36:42.032504    6592 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 17:36:42.032646    6592 cache.go:56] Caching tarball of preloaded images
	I0419 17:36:42.032646    6592 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0419 17:36:42.033190    6592 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 17:36:42.033286    6592 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
	I0419 17:36:42.037601    6592 start.go:360] acquireMachinesLock for ha-095800-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 17:36:42.039479    6592 start.go:364] duration metric: took 1.8778ms to acquireMachinesLock for "ha-095800-m03"
	I0419 17:36:42.039479    6592 start.go:93] Provisioning new machine with config: &{Name:ha-095800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP:172.19.47.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.32.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.39.106 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 17:36:42.039479    6592 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0419 17:36:42.040643    6592 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 17:36:42.040643    6592 start.go:159] libmachine.API.Create for "ha-095800" (driver="hyperv")
	I0419 17:36:42.040643    6592 client.go:168] LocalClient.Create starting
	I0419 17:36:42.040643    6592 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0419 17:36:42.046431    6592 main.go:141] libmachine: Decoding PEM data...
	I0419 17:36:42.046431    6592 main.go:141] libmachine: Parsing certificate...
	I0419 17:36:42.046763    6592 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0419 17:36:42.047009    6592 main.go:141] libmachine: Decoding PEM data...
	I0419 17:36:42.047009    6592 main.go:141] libmachine: Parsing certificate...
	I0419 17:36:42.047135    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0419 17:36:43.932368    6592 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0419 17:36:43.932368    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:36:43.932561    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0419 17:36:45.687039    6592 main.go:141] libmachine: [stdout =====>] : False
	
	I0419 17:36:45.687039    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:36:45.687039    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 17:36:47.203723    6592 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 17:36:47.211508    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:36:47.211508    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 17:36:50.951966    6592 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 17:36:50.964727    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:36:50.967214    6592 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0419 17:36:51.467126    6592 main.go:141] libmachine: Creating SSH key...
	I0419 17:36:51.639067    6592 main.go:141] libmachine: Creating VM...
	I0419 17:36:51.639499    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 17:36:54.554035    6592 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 17:36:54.565289    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:36:54.565289    6592 main.go:141] libmachine: Using switch "Default Switch"
	I0419 17:36:54.565462    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 17:36:56.328115    6592 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 17:36:56.328115    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:36:56.337140    6592 main.go:141] libmachine: Creating VHD
	I0419 17:36:56.337140    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0419 17:36:59.947234    6592 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : DAD6BA86-FF7C-4654-8EED-887E8261B451
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0419 17:36:59.947234    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:36:59.947234    6592 main.go:141] libmachine: Writing magic tar header
	I0419 17:36:59.947234    6592 main.go:141] libmachine: Writing SSH key tar header
	I0419 17:36:59.955571    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0419 17:37:03.033601    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:03.033697    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:03.033697    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\disk.vhd' -SizeBytes 20000MB
	I0419 17:37:05.489037    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:05.489037    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:05.500648    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-095800-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0419 17:37:09.042127    6592 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-095800-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0419 17:37:09.042127    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:09.054998    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-095800-m03 -DynamicMemoryEnabled $false
	I0419 17:37:11.240904    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:11.240904    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:11.252370    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-095800-m03 -Count 2
	I0419 17:37:13.436540    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:13.436627    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:13.436627    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-095800-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\boot2docker.iso'
	I0419 17:37:15.921177    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:15.921177    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:15.932785    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-095800-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\disk.vhd'
	I0419 17:37:18.552415    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:18.552415    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:18.552415    6592 main.go:141] libmachine: Starting VM...
	I0419 17:37:18.554115    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-095800-m03
	I0419 17:37:21.570087    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:21.570087    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:21.570087    6592 main.go:141] libmachine: Waiting for host to start...
	I0419 17:37:21.582998    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:37:23.769670    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:37:23.769670    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:23.776327    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:37:26.236477    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:26.248019    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:27.251453    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:37:29.397765    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:37:29.409166    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:29.409166    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:37:31.927720    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:31.927720    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:32.935094    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:37:35.056916    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:37:35.056916    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:35.057406    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:37:37.526989    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:37.528182    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:38.542309    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:37:40.664594    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:37:40.664594    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:40.664936    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:37:43.135114    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:43.135114    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:44.160003    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:37:46.282915    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:37:46.287876    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:46.287876    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:37:48.812542    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:37:48.812634    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:48.812634    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:37:50.832585    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:37:50.832585    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:50.844748    6592 machine.go:94] provisionDockerMachine start ...
	I0419 17:37:50.844852    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:37:52.948535    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:37:52.948535    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:52.948676    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:37:55.469644    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:37:55.469644    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:55.483878    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:37:55.491497    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.152 22 <nil> <nil>}
	I0419 17:37:55.491497    6592 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 17:37:55.641932    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0419 17:37:55.641932    6592 buildroot.go:166] provisioning hostname "ha-095800-m03"
	I0419 17:37:55.642038    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:37:57.638660    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:37:57.638660    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:57.650318    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:00.126300    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:00.141381    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:00.148794    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:38:00.149323    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.152 22 <nil> <nil>}
	I0419 17:38:00.149323    6592 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-095800-m03 && echo "ha-095800-m03" | sudo tee /etc/hostname
	I0419 17:38:00.316085    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-095800-m03
	
	I0419 17:38:00.316203    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:02.358346    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:02.358346    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:02.358616    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:04.852449    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:04.852449    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:04.859428    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:38:04.860096    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.152 22 <nil> <nil>}
	I0419 17:38:04.860096    6592 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-095800-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-095800-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-095800-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 17:38:05.016232    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 17:38:05.016338    6592 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0419 17:38:05.016338    6592 buildroot.go:174] setting up certificates
	I0419 17:38:05.016441    6592 provision.go:84] configureAuth start
	I0419 17:38:05.016441    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:07.058195    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:07.070447    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:07.070447    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:09.545586    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:09.551380    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:09.551380    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:11.633519    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:11.633519    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:11.633519    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:14.185339    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:14.197210    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:14.197210    6592 provision.go:143] copyHostCerts
	I0419 17:38:14.197504    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0419 17:38:14.197957    6592 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0419 17:38:14.198076    6592 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0419 17:38:14.198584    6592 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0419 17:38:14.200260    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0419 17:38:14.200803    6592 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0419 17:38:14.201203    6592 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0419 17:38:14.201713    6592 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0419 17:38:14.203486    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0419 17:38:14.203882    6592 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0419 17:38:14.203882    6592 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0419 17:38:14.204519    6592 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0419 17:38:14.205446    6592 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-095800-m03 san=[127.0.0.1 172.19.47.152 ha-095800-m03 localhost minikube]
	I0419 17:38:14.367604    6592 provision.go:177] copyRemoteCerts
	I0419 17:38:14.389720    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 17:38:14.390006    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:16.418777    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:16.418777    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:16.418777    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:18.926535    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:18.938842    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:18.939168    6592 sshutil.go:53] new ssh client: &{IP:172.19.47.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\id_rsa Username:docker}
	I0419 17:38:19.052396    6592 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6626128s)
	I0419 17:38:19.052507    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0419 17:38:19.053004    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0419 17:38:19.102195    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0419 17:38:19.102770    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0419 17:38:19.152551    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0419 17:38:19.153168    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0419 17:38:19.199380    6592 provision.go:87] duration metric: took 14.1828613s to configureAuth
	I0419 17:38:19.199446    6592 buildroot.go:189] setting minikube options for container-runtime
	I0419 17:38:19.199681    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:38:19.199681    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:21.279616    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:21.279616    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:21.287066    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:23.815497    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:23.815497    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:23.821891    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:38:23.822544    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.152 22 <nil> <nil>}
	I0419 17:38:23.822544    6592 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0419 17:38:23.967507    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0419 17:38:23.967642    6592 buildroot.go:70] root file system type: tmpfs
	I0419 17:38:23.967845    6592 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0419 17:38:23.967845    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:26.030688    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:26.030856    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:26.030944    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:28.496213    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:28.496213    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:28.515064    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:38:28.515064    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.152 22 <nil> <nil>}
	I0419 17:38:28.515064    6592 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.32.218"
	Environment="NO_PROXY=172.19.32.218,172.19.39.106"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0419 17:38:28.685193    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.32.218
	Environment=NO_PROXY=172.19.32.218,172.19.39.106
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0419 17:38:28.685321    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:30.749145    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:30.749145    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:30.749351    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:33.236418    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:33.248856    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:33.256122    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:38:33.256927    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.152 22 <nil> <nil>}
	I0419 17:38:33.256995    6592 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0419 17:38:35.396561    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0419 17:38:35.396561    6592 machine.go:97] duration metric: took 44.5517062s to provisionDockerMachine
	I0419 17:38:35.396561    6592 client.go:171] duration metric: took 1m53.3556453s to LocalClient.Create
	I0419 17:38:35.397119    6592 start.go:167] duration metric: took 1m53.3562038s to libmachine.API.Create "ha-095800"
	I0419 17:38:35.397188    6592 start.go:293] postStartSetup for "ha-095800-m03" (driver="hyperv")
	I0419 17:38:35.397188    6592 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 17:38:35.411546    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 17:38:35.411546    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:37.453290    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:37.453290    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:37.465128    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:39.917137    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:39.917137    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:39.928497    6592 sshutil.go:53] new ssh client: &{IP:172.19.47.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\id_rsa Username:docker}
	I0419 17:38:40.057982    6592 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6464251s)
	I0419 17:38:40.074420    6592 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 17:38:40.082216    6592 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 17:38:40.082216    6592 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0419 17:38:40.082888    6592 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0419 17:38:40.083948    6592 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> 34162.pem in /etc/ssl/certs
	I0419 17:38:40.083948    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /etc/ssl/certs/34162.pem
	I0419 17:38:40.095395    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 17:38:40.116711    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /etc/ssl/certs/34162.pem (1708 bytes)
	I0419 17:38:40.165058    6592 start.go:296] duration metric: took 4.7678579s for postStartSetup
	I0419 17:38:40.168238    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:42.185258    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:42.185474    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:42.185474    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:44.649522    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:44.649522    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:44.649902    6592 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
	I0419 17:38:44.652645    6592 start.go:128] duration metric: took 2m2.6128718s to createHost
	I0419 17:38:44.652743    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:46.664593    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:46.675511    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:46.675511    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:49.156680    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:49.156680    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:49.162909    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:38:49.163518    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.152 22 <nil> <nil>}
	I0419 17:38:49.163563    6592 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 17:38:49.298602    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713573529.295299619
	
	I0419 17:38:49.298602    6592 fix.go:216] guest clock: 1713573529.295299619
	I0419 17:38:49.298602    6592 fix.go:229] Guest: 2024-04-19 17:38:49.295299619 -0700 PDT Remote: 2024-04-19 17:38:44.6526452 -0700 PDT m=+551.019393501 (delta=4.642654419s)
	I0419 17:38:49.298602    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:51.293675    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:51.293794    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:51.293794    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:53.737513    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:53.737513    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:53.754826    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:38:53.755448    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.152 22 <nil> <nil>}
	I0419 17:38:53.755448    6592 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713573529
	I0419 17:38:53.913899    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: Sat Apr 20 00:38:49 UTC 2024
	
	I0419 17:38:53.914022    6592 fix.go:236] clock set: Sat Apr 20 00:38:49 UTC 2024
	 (err=<nil>)
	I0419 17:38:53.914022    6592 start.go:83] releasing machines lock for "ha-095800-m03", held for 2m11.8742264s
	I0419 17:38:53.914208    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:55.944490    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:55.944490    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:55.944490    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:58.428942    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:58.428942    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:58.431539    6592 out.go:177] * Found network options:
	I0419 17:38:58.434241    6592 out.go:177]   - NO_PROXY=172.19.32.218,172.19.39.106
	W0419 17:38:58.434433    6592 proxy.go:119] fail to check proxy env: Error ip not in block
	W0419 17:38:58.434433    6592 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 17:38:58.439069    6592 out.go:177]   - NO_PROXY=172.19.32.218,172.19.39.106
	W0419 17:38:58.444061    6592 proxy.go:119] fail to check proxy env: Error ip not in block
	W0419 17:38:58.444061    6592 proxy.go:119] fail to check proxy env: Error ip not in block
	W0419 17:38:58.445233    6592 proxy.go:119] fail to check proxy env: Error ip not in block
	W0419 17:38:58.445233    6592 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 17:38:58.446928    6592 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 17:38:58.446928    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:58.452204    6592 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0419 17:38:58.452204    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:39:00.570988    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:39:00.570988    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:39:00.571108    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:39:00.571108    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:39:00.571108    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:39:00.571108    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:39:03.123020    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:39:03.123147    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:39:03.123379    6592 sshutil.go:53] new ssh client: &{IP:172.19.47.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\id_rsa Username:docker}
	I0419 17:39:03.179727    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:39:03.181234    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:39:03.181478    6592 sshutil.go:53] new ssh client: &{IP:172.19.47.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\id_rsa Username:docker}
	I0419 17:39:03.222672    6592 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7703898s)
	W0419 17:39:03.222742    6592 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 17:39:03.237311    6592 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 17:39:03.347132    6592 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 17:39:03.347132    6592 start.go:494] detecting cgroup driver to use...
	I0419 17:39:03.347132    6592 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.900192s)
	I0419 17:39:03.347132    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 17:39:03.397019    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0419 17:39:03.435054    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0419 17:39:03.457045    6592 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0419 17:39:03.470340    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0419 17:39:03.503878    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 17:39:03.543272    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0419 17:39:03.577110    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 17:39:03.612714    6592 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 17:39:03.650344    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0419 17:39:03.679904    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0419 17:39:03.717614    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0419 17:39:03.762410    6592 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 17:39:03.794589    6592 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 17:39:03.827744    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:39:04.022133    6592 ssh_runner.go:195] Run: sudo systemctl restart containerd
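The steps above write `/etc/crictl.yaml` and patch `/etc/containerd/config.toml` with `sed` before restarting containerd. A minimal local sketch of those same edits, run against a scratch copy instead of the real files (the stand-in config contents and paths are assumptions; the real run executes over SSH as root):

```shell
set -eu
work=$(mktemp -d)

# What the "printf %s ... | sudo tee /etc/crictl.yaml" step writes:
printf '%s\n' 'runtime-endpoint: unix:///run/containerd/containerd.sock' \
  > "$work/crictl.yaml"

# A tiny stand-in for the containerd config the sed edits target:
cat > "$work/config.toml" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF

# The same substitutions the log performs (pause image, cgroupfs driver):
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$work/config.toml"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$work/config.toml"

grep 'SystemdCgroup = false' "$work/config.toml"
```

On the real guest a `systemctl daemon-reload && systemctl restart containerd` follows, as the log shows.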
	I0419 17:39:04.042164    6592 start.go:494] detecting cgroup driver to use...
	I0419 17:39:04.073695    6592 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0419 17:39:04.108778    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 17:39:04.146635    6592 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 17:39:04.196087    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 17:39:04.237093    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 17:39:04.274071    6592 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0419 17:39:04.337576    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 17:39:04.364266    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 17:39:04.416875    6592 ssh_runner.go:195] Run: which cri-dockerd
	I0419 17:39:04.436691    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0419 17:39:04.454448    6592 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0419 17:39:04.497202    6592 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0419 17:39:04.697669    6592 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0419 17:39:04.891377    6592 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0419 17:39:04.891377    6592 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0419 17:39:04.945548    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:39:05.159643    6592 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 17:39:07.689312    6592 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5296124s)
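The 130-byte `/etc/docker/daemon.json` scp'd above is not reproduced in the log; a representative cgroupfs docker config of the kind minikube writes (the exact contents here are an assumption), checked locally in a scratch directory:

```shell
set -eu
work=$(mktemp -d)

# Assumed shape of the daemon.json that sets "cgroupfs" as the cgroup driver:
cat > "$work/daemon.json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
EOF

# On the guest, a daemon-reload and docker restart apply it, as logged:
#   sudo systemctl daemon-reload && sudo systemctl restart docker
grep -q 'native.cgroupdriver=cgroupfs' "$work/daemon.json" && echo daemon-json-ok
```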
	I0419 17:39:07.703394    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0419 17:39:07.745691    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 17:39:07.783047    6592 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0419 17:39:07.990577    6592 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0419 17:39:08.193850    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:39:08.394421    6592 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0419 17:39:08.438171    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 17:39:08.477581    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:39:08.676485    6592 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0419 17:39:08.785823    6592 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0419 17:39:08.801526    6592 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0419 17:39:08.813074    6592 start.go:562] Will wait 60s for crictl version
	I0419 17:39:08.826354    6592 ssh_runner.go:195] Run: which crictl
	I0419 17:39:08.845207    6592 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 17:39:08.902855    6592 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0419 17:39:08.914068    6592 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 17:39:08.958593    6592 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 17:39:08.992203    6592 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0419 17:39:08.994755    6592 out.go:177]   - env NO_PROXY=172.19.32.218
	I0419 17:39:08.997378    6592 out.go:177]   - env NO_PROXY=172.19.32.218,172.19.39.106
	I0419 17:39:08.998835    6592 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0419 17:39:09.001576    6592 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0419 17:39:09.001576    6592 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0419 17:39:09.001576    6592 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0419 17:39:09.001576    6592 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8c:b9:25 Flags:up|broadcast|multicast|running}
	I0419 17:39:09.006993    6592 ip.go:210] interface addr: fe80::ce04:318e:a1d8:4460/64
	I0419 17:39:09.006993    6592 ip.go:210] interface addr: 172.19.32.1/20
	I0419 17:39:09.018531    6592 ssh_runner.go:195] Run: grep 172.19.32.1	host.minikube.internal$ /etc/hosts
	I0419 17:39:09.025883    6592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.32.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 17:39:09.053323    6592 mustload.go:65] Loading cluster: ha-095800
	I0419 17:39:09.054137    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:39:09.054235    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:39:11.118523    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:39:11.118639    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:39:11.118639    6592 host.go:66] Checking if "ha-095800" exists ...
	I0419 17:39:11.119458    6592 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800 for IP: 172.19.47.152
	I0419 17:39:11.119458    6592 certs.go:194] generating shared ca certs ...
	I0419 17:39:11.119458    6592 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:39:11.120259    6592 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0419 17:39:11.120259    6592 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0419 17:39:11.120259    6592 certs.go:256] generating profile certs ...
	I0419 17:39:11.121626    6592 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\client.key
	I0419 17:39:11.121853    6592 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.ef4167b0
	I0419 17:39:11.121982    6592 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.ef4167b0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.32.218 172.19.39.106 172.19.47.152 172.19.47.254]
	I0419 17:39:11.213754    6592 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.ef4167b0 ...
	I0419 17:39:11.213754    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.ef4167b0: {Name:mk764ccec1a095eae423822d018e7356d3a6c394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:39:11.216559    6592 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.ef4167b0 ...
	I0419 17:39:11.216559    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.ef4167b0: {Name:mkaa0fbf04b32aade596377c008e33461f7877fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:39:11.217442    6592 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.ef4167b0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt
	I0419 17:39:11.224115    6592 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.ef4167b0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key
	I0419 17:39:11.230756    6592 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key
	I0419 17:39:11.230756    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 17:39:11.230756    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0419 17:39:11.232684    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 17:39:11.232937    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 17:39:11.232937    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 17:39:11.233261    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 17:39:11.233446    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 17:39:11.233446    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 17:39:11.233446    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem (1338 bytes)
	W0419 17:39:11.234755    6592 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416_empty.pem, impossibly tiny 0 bytes
	I0419 17:39:11.234755    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0419 17:39:11.235306    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0419 17:39:11.235621    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0419 17:39:11.235976    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0419 17:39:11.236203    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem (1708 bytes)
	I0419 17:39:11.236690    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /usr/share/ca-certificates/34162.pem
	I0419 17:39:11.236896    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:39:11.237111    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem -> /usr/share/ca-certificates/3416.pem
	I0419 17:39:11.237289    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:39:13.291863    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:39:13.291863    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:39:13.291863    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:39:15.825166    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:39:15.825166    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:39:15.825293    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:39:15.945571    6592 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0419 17:39:15.955555    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0419 17:39:15.996290    6592 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0419 17:39:16.007312    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0419 17:39:16.043327    6592 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0419 17:39:16.053007    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0419 17:39:16.083976    6592 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0419 17:39:16.094316    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0419 17:39:16.133200    6592 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0419 17:39:16.143100    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0419 17:39:16.176803    6592 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0419 17:39:16.185991    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0419 17:39:16.206974    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 17:39:16.255804    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 17:39:16.306403    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 17:39:16.357156    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 17:39:16.403469    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0419 17:39:16.449096    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0419 17:39:16.496831    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 17:39:16.543353    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0419 17:39:16.597157    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /usr/share/ca-certificates/34162.pem (1708 bytes)
	I0419 17:39:16.643894    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 17:39:16.694387    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem --> /usr/share/ca-certificates/3416.pem (1338 bytes)
	I0419 17:39:16.739429    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0419 17:39:16.771610    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0419 17:39:16.804794    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0419 17:39:16.837960    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0419 17:39:16.875078    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0419 17:39:16.904553    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0419 17:39:16.946580    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0419 17:39:16.992453    6592 ssh_runner.go:195] Run: openssl version
	I0419 17:39:17.018985    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34162.pem && ln -fs /usr/share/ca-certificates/34162.pem /etc/ssl/certs/34162.pem"
	I0419 17:39:17.057785    6592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34162.pem
	I0419 17:39:17.066127    6592 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 17:39:17.077350    6592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34162.pem
	I0419 17:39:17.102518    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34162.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 17:39:17.141032    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 17:39:17.175317    6592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:39:17.183890    6592 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:39:17.197519    6592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:39:17.218226    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 17:39:17.258654    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3416.pem && ln -fs /usr/share/ca-certificates/3416.pem /etc/ssl/certs/3416.pem"
	I0419 17:39:17.295567    6592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3416.pem
	I0419 17:39:17.302678    6592 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 17:39:17.314421    6592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3416.pem
	I0419 17:39:17.338964    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3416.pem /etc/ssl/certs/51391683.0"
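The `openssl x509 -hash` / `ln -fs` pairs above build the subject-hash symlinks (`3ec20f2e.0`, `b5213941.0`, `51391683.0`) that OpenSSL's trust-store lookup expects. A self-contained sketch of the same idiom with a throwaway self-signed cert (the `exampleCA` subject and scratch directory are assumptions; the real run links under `/etc/ssl/certs`):

```shell
set -eu
work=$(mktemp -d)

# Generate a throwaway CA cert to hash:
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$work/ca.key" \
  -out "$work/ca.pem" -days 1 -subj "/CN=exampleCA" 2>/dev/null

# Subject-hash filename, exactly as the logged commands compute it:
hash=$(openssl x509 -hash -noout -in "$work/ca.pem")
ln -fs "$work/ca.pem" "$work/$hash.0"

readlink "$work/$hash.0"
```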
	I0419 17:39:17.374864    6592 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 17:39:17.381271    6592 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 17:39:17.381271    6592 kubeadm.go:928] updating node {m03 172.19.47.152 8443 v1.30.0 docker true true} ...
	I0419 17:39:17.381907    6592 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-095800-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.47.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP:172.19.47.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 17:39:17.381978    6592 kube-vip.go:111] generating kube-vip config ...
	I0419 17:39:17.394978    6592 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0419 17:39:17.421717    6592 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0419 17:39:17.421881    6592 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.47.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0419 17:39:17.434497    6592 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 17:39:17.456304    6592 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0419 17:39:17.467349    6592 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0419 17:39:17.494426    6592 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0419 17:39:17.494426    6592 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0419 17:39:17.494426    6592 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
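The `checksum=file:…sha256` suffix above pairs each `dl.k8s.io` binary URL with its published SHA-256 digest, so the downloader can verify before installing to `/var/lib/minikube/binaries/<version>/`. A minimal local sketch of that verification pattern (the scratch file stands in for the downloaded binary; no network access):

```shell
set -eu
work=$(mktemp -d)

# Stand-in for a downloaded binary and its published .sha256 sidecar:
printf 'fake-binary-bytes' > "$work/kubectl"
sha256sum "$work/kubectl" | awk '{print $1}' > "$work/kubectl.sha256"

# Verify the digest matches before "installing":
test "$(sha256sum "$work/kubectl" | awk '{print $1}')" = "$(cat "$work/kubectl.sha256")" \
  && echo checksum-ok
```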
	I0419 17:39:17.494426    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0419 17:39:17.494972    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0419 17:39:17.510020    6592 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0419 17:39:17.513542    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 17:39:17.513542    6592 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0419 17:39:17.523702    6592 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0419 17:39:17.523702    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0419 17:39:17.567744    6592 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0419 17:39:17.567744    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0419 17:39:17.567744    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0419 17:39:17.594374    6592 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0419 17:39:17.638059    6592 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0419 17:39:17.638290    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0419 17:39:18.819259    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0419 17:39:18.901831    6592 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0419 17:39:18.934888    6592 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 17:39:18.971838    6592 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0419 17:39:19.022963    6592 ssh_runner.go:195] Run: grep 172.19.47.254	control-plane.minikube.internal$ /etc/hosts
	I0419 17:39:19.029935    6592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.47.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 17:39:19.066565    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:39:19.277218    6592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 17:39:19.309649    6592 host.go:66] Checking if "ha-095800" exists ...
	I0419 17:39:19.310691    6592 start.go:316] joinCluster: &{Name:ha-095800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP:172.19.47.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.32.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.39.106 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.19.47.152 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 17:39:19.310917    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0419 17:39:19.310976    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:39:21.350045    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:39:21.350045    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:39:21.350045    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:39:23.917285    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:39:23.917285    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:39:23.917680    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:39:24.132257    6592 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8212639s)
	I0419 17:39:24.132314    6592 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.19.47.152 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 17:39:24.132440    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cfxogg.84yr6zh5qlpcbk7r --discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-095800-m03 --control-plane --apiserver-advertise-address=172.19.47.152 --apiserver-bind-port=8443"
	I0419 17:40:09.001560    6592 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cfxogg.84yr6zh5qlpcbk7r --discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-095800-m03 --control-plane --apiserver-advertise-address=172.19.47.152 --apiserver-bind-port=8443": (44.8689706s)
	I0419 17:40:09.001670    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0419 17:40:09.939583    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-095800-m03 minikube.k8s.io/updated_at=2024_04_19T17_40_09_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=ha-095800 minikube.k8s.io/primary=false
	I0419 17:40:10.112579    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-095800-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0419 17:40:10.304761    6592 start.go:318] duration metric: took 50.9939474s to joinCluster
	I0419 17:40:10.304761    6592 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.19.47.152 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 17:40:10.307995    6592 out.go:177] * Verifying Kubernetes components...
	I0419 17:40:10.305764    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:40:10.323961    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:40:10.656939    6592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 17:40:10.695868    6592 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 17:40:10.696716    6592 kapi.go:59] client config for ha-095800: &rest.Config{Host:"https://172.19.47.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-095800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-095800\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c35620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0419 17:40:10.696843    6592 kubeadm.go:477] Overriding stale ClientConfig host https://172.19.47.254:8443 with https://172.19.32.218:8443
	I0419 17:40:10.697071    6592 node_ready.go:35] waiting up to 6m0s for node "ha-095800-m03" to be "Ready" ...
	I0419 17:40:10.697712    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:10.697712    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:10.697769    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:10.697769    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:10.713146    6592 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0419 17:40:11.213892    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:11.213892    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:11.213892    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:11.213892    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:11.220099    6592 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:40:11.698788    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:11.698788    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:11.699205    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:11.699205    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:11.702127    6592 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:40:12.208561    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:12.208632    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:12.208632    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:12.208632    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:12.213630    6592 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:40:12.710875    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:12.710918    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:12.710956    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:12.710956    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:12.714370    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:40:12.717006    6592 node_ready.go:53] node "ha-095800-m03" has status "Ready":"False"
	I0419 17:40:13.205233    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:13.205368    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:13.205368    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:13.205368    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:13.211711    6592 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:40:13.711288    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:13.711345    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:13.711345    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:13.711345    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:13.711715    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:14.212761    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:14.212831    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:14.212882    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:14.212882    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:14.218576    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:40:14.713172    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:14.713345    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:14.713345    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:14.713345    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:14.715843    6592 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:40:14.718765    6592 node_ready.go:53] node "ha-095800-m03" has status "Ready":"False"
	I0419 17:40:15.212460    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:15.212673    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:15.212673    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:15.212673    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:15.216665    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:40:15.702162    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:15.702162    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:15.702162    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:15.702162    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:15.702856    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:16.208060    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:16.208060    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:16.208060    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:16.208060    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:16.211086    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:40:16.702963    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:16.703060    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:16.703060    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:16.703060    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:17.097940    6592 round_trippers.go:574] Response Status: 200 OK in 394 milliseconds
	I0419 17:40:17.098803    6592 node_ready.go:53] node "ha-095800-m03" has status "Ready":"False"
	I0419 17:40:17.216857    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:17.216939    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:17.216939    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:17.216988    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:17.225131    6592 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 17:40:17.704708    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:17.704708    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:17.704708    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:17.704865    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:17.705283    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:18.202942    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:18.203165    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:18.203165    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:18.203165    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:18.208674    6592 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:40:18.702647    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:18.702647    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:18.702729    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:18.702729    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:18.707053    6592 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:40:19.211521    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:19.211822    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.211822    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.211822    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.216754    6592 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:40:19.217679    6592 node_ready.go:49] node "ha-095800-m03" has status "Ready":"True"
	I0419 17:40:19.217679    6592 node_ready.go:38] duration metric: took 8.5205877s for node "ha-095800-m03" to be "Ready" ...
	I0419 17:40:19.217762    6592 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 17:40:19.217851    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:40:19.217851    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.217851    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.217851    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.227736    6592 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0419 17:40:19.240257    6592 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7mk28" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.240475    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7mk28
	I0419 17:40:19.240504    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.240504    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.240504    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.241090    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:19.246642    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:19.246701    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.246749    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.246749    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.250264    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:40:19.251590    6592 pod_ready.go:92] pod "coredns-7db6d8ff4d-7mk28" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:19.251590    6592 pod_ready.go:81] duration metric: took 11.242ms for pod "coredns-7db6d8ff4d-7mk28" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.251590    6592 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vklb9" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.251590    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vklb9
	I0419 17:40:19.251590    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.251590    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.251590    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.252978    6592 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:40:19.256787    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:19.256787    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.256787    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.256787    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.260956    6592 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:40:19.261078    6592 pod_ready.go:92] pod "coredns-7db6d8ff4d-vklb9" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:19.262089    6592 pod_ready.go:81] duration metric: took 10.4983ms for pod "coredns-7db6d8ff4d-vklb9" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.262089    6592 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.262089    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-095800
	I0419 17:40:19.262089    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.262089    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.262089    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.269434    6592 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 17:40:19.270652    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:19.270652    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.270652    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.270652    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.275098    6592 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:40:19.275322    6592 pod_ready.go:92] pod "etcd-ha-095800" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:19.275909    6592 pod_ready.go:81] duration metric: took 13.8208ms for pod "etcd-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.275909    6592 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.275909    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-095800-m02
	I0419 17:40:19.275909    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.275909    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.275909    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.280885    6592 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:40:19.281748    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:40:19.281937    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.281937    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.281937    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.285680    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:40:19.287587    6592 pod_ready.go:92] pod "etcd-ha-095800-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:19.287620    6592 pod_ready.go:81] duration metric: took 11.7103ms for pod "etcd-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.287620    6592 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-095800-m03" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.413731    6592 request.go:629] Waited for 125.937ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-095800-m03
	I0419 17:40:19.413967    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-095800-m03
	I0419 17:40:19.414042    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.414068    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.414068    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.417711    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:40:19.619930    6592 request.go:629] Waited for 198.0891ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:19.620022    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:19.620022    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.620022    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.620022    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.621830    6592 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:40:19.621830    6592 pod_ready.go:92] pod "etcd-ha-095800-m03" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:19.621830    6592 pod_ready.go:81] duration metric: took 334.2095ms for pod "etcd-ha-095800-m03" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.621830    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.820222    6592 request.go:629] Waited for 198.3916ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800
	I0419 17:40:19.820367    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800
	I0419 17:40:19.820367    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.820367    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.820367    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.821048    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:20.015370    6592 request.go:629] Waited for 187.9931ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:20.015658    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:20.015658    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:20.015695    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:20.015695    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:20.016065    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:20.021653    6592 pod_ready.go:92] pod "kube-apiserver-ha-095800" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:20.021653    6592 pod_ready.go:81] duration metric: took 399.8221ms for pod "kube-apiserver-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:20.021734    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:20.238826    6592 request.go:629] Waited for 216.8856ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m02
	I0419 17:40:20.238826    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m02
	I0419 17:40:20.238826    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:20.238826    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:20.238826    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:20.246211    6592 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 17:40:20.425566    6592 request.go:629] Waited for 177.8182ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:40:20.425747    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:40:20.425747    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:20.425747    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:20.425747    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:20.426530    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:20.432172    6592 pod_ready.go:92] pod "kube-apiserver-ha-095800-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:20.432236    6592 pod_ready.go:81] duration metric: took 410.501ms for pod "kube-apiserver-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:20.432312    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-095800-m03" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:20.618720    6592 request.go:629] Waited for 186.0776ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m03
	I0419 17:40:20.619042    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m03
	I0419 17:40:20.619137    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:20.619137    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:20.619137    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:20.619888    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:20.812375    6592 request.go:629] Waited for 186.6694ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:20.812555    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:20.812555    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:20.812676    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:20.812676    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:20.813332    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:21.025836    6592 request.go:629] Waited for 69.949ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m03
	I0419 17:40:21.026187    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m03
	I0419 17:40:21.026187    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:21.026187    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:21.026187    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:21.032496    6592 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:40:21.226179    6592 request.go:629] Waited for 192.7513ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:21.226179    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:21.226179    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:21.226179    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:21.226179    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:21.226720    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:21.448697    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m03
	I0419 17:40:21.448697    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:21.448697    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:21.448697    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:21.449253    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:21.629236    6592 request.go:629] Waited for 170.9525ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:21.629380    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:21.629423    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:21.629463    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:21.629463    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:21.635409    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:40:21.944000    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m03
	I0419 17:40:21.944090    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:21.944090    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:21.944090    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:21.944331    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:22.019189    6592 request.go:629] Waited for 74.7797ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:22.019284    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:22.019284    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:22.019284    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:22.019284    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:22.019555    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:22.025096    6592 pod_ready.go:92] pod "kube-apiserver-ha-095800-m03" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:22.025096    6592 pod_ready.go:81] duration metric: took 1.5927794s for pod "kube-apiserver-ha-095800-m03" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:22.025096    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:22.218615    6592 request.go:629] Waited for 193.1446ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800
	I0419 17:40:22.218877    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800
	I0419 17:40:22.218910    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:22.218910    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:22.218962    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:22.219273    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:22.414366    6592 request.go:629] Waited for 189.6078ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:22.414688    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:22.414763    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:22.414763    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:22.414763    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:22.415521    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:22.422357    6592 pod_ready.go:92] pod "kube-controller-manager-ha-095800" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:22.422357    6592 pod_ready.go:81] duration metric: took 397.2607ms for pod "kube-controller-manager-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:22.422490    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:22.620926    6592 request.go:629] Waited for 198.0742ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800-m02
	I0419 17:40:22.621022    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800-m02
	I0419 17:40:22.621022    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:22.621152    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:22.621152    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:22.621996    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:22.822253    6592 request.go:629] Waited for 192.6322ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:40:22.822253    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:40:22.822253    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:22.822253    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:22.822547    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:22.823190    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:22.828672    6592 pod_ready.go:92] pod "kube-controller-manager-ha-095800-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:22.828750    6592 pod_ready.go:81] duration metric: took 406.2586ms for pod "kube-controller-manager-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:22.828750    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-095800-m03" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:23.026425    6592 request.go:629] Waited for 197.4078ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800-m03
	I0419 17:40:23.026516    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800-m03
	I0419 17:40:23.026516    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:23.026516    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:23.026516    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:23.026915    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:23.223351    6592 request.go:629] Waited for 189.653ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:23.223519    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:23.223642    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:23.223681    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:23.223723    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:23.227877    6592 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:40:23.229030    6592 pod_ready.go:92] pod "kube-controller-manager-ha-095800-m03" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:23.229030    6592 pod_ready.go:81] duration metric: took 400.2798ms for pod "kube-controller-manager-ha-095800-m03" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:23.229206    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4nldk" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:23.425877    6592 request.go:629] Waited for 196.5988ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4nldk
	I0419 17:40:23.426148    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4nldk
	I0419 17:40:23.426209    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:23.426209    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:23.426209    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:23.437207    6592 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0419 17:40:23.616332    6592 request.go:629] Waited for 178.3307ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:40:23.616649    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:40:23.616710    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:23.616772    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:23.616772    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:23.619753    6592 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:40:23.623566    6592 pod_ready.go:92] pod "kube-proxy-4nldk" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:23.623566    6592 pod_ready.go:81] duration metric: took 394.3594ms for pod "kube-proxy-4nldk" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:23.623566    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5dp8h" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:23.821177    6592 request.go:629] Waited for 196.706ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5dp8h
	I0419 17:40:23.821206    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5dp8h
	I0419 17:40:23.821206    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:23.821206    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:23.821206    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:23.826556    6592 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:40:24.020696    6592 request.go:629] Waited for 193.0611ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:24.020883    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:24.020883    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:24.020883    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:24.020883    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:24.034179    6592 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0419 17:40:24.034462    6592 pod_ready.go:92] pod "kube-proxy-5dp8h" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:24.034462    6592 pod_ready.go:81] duration metric: took 410.8949ms for pod "kube-proxy-5dp8h" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:24.035004    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vq826" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:24.227241    6592 request.go:629] Waited for 192.1818ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vq826
	I0419 17:40:24.227241    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vq826
	I0419 17:40:24.227241    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:24.227241    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:24.227241    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:24.239234    6592 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0419 17:40:24.424820    6592 request.go:629] Waited for 184.277ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:24.425093    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:24.425157    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:24.425157    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:24.425157    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:24.433275    6592 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 17:40:24.434400    6592 pod_ready.go:92] pod "kube-proxy-vq826" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:24.434400    6592 pod_ready.go:81] duration metric: took 399.3954ms for pod "kube-proxy-vq826" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:24.434400    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:24.627095    6592 request.go:629] Waited for 192.6938ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800
	I0419 17:40:24.627095    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800
	I0419 17:40:24.627095    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:24.627095    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:24.627095    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:24.627552    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:24.827219    6592 request.go:629] Waited for 195.1688ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:24.827296    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:24.827406    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:24.827406    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:24.827406    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:24.828035    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:24.834608    6592 pod_ready.go:92] pod "kube-scheduler-ha-095800" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:24.835160    6592 pod_ready.go:81] duration metric: took 400.7587ms for pod "kube-scheduler-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:24.835160    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:25.015561    6592 request.go:629] Waited for 180.2485ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800-m02
	I0419 17:40:25.015793    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800-m02
	I0419 17:40:25.015847    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:25.015894    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:25.015894    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:25.021165    6592 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:40:25.216966    6592 request.go:629] Waited for 194.3002ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:40:25.217221    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:40:25.217284    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:25.217323    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:25.217338    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:25.218007    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:25.222484    6592 pod_ready.go:92] pod "kube-scheduler-ha-095800-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:25.222484    6592 pod_ready.go:81] duration metric: took 387.3235ms for pod "kube-scheduler-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:25.222484    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-095800-m03" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:25.426512    6592 request.go:629] Waited for 203.4197ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800-m03
	I0419 17:40:25.426512    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800-m03
	I0419 17:40:25.426512    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:25.426512    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:25.426512    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:25.427029    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:25.616024    6592 request.go:629] Waited for 182.6037ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:25.616287    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:25.616287    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:25.616323    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:25.616323    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:25.622967    6592 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:40:25.623672    6592 pod_ready.go:92] pod "kube-scheduler-ha-095800-m03" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:25.623672    6592 pod_ready.go:81] duration metric: took 401.1865ms for pod "kube-scheduler-ha-095800-m03" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:25.624207    6592 pod_ready.go:38] duration metric: took 6.4064296s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 17:40:25.624207    6592 api_server.go:52] waiting for apiserver process to appear ...
	I0419 17:40:25.638890    6592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 17:40:25.667271    6592 api_server.go:72] duration metric: took 15.3624731s to wait for apiserver process to appear ...
	I0419 17:40:25.667308    6592 api_server.go:88] waiting for apiserver healthz status ...
	I0419 17:40:25.667308    6592 api_server.go:253] Checking apiserver healthz at https://172.19.32.218:8443/healthz ...
	I0419 17:40:25.675063    6592 api_server.go:279] https://172.19.32.218:8443/healthz returned 200:
	ok
	I0419 17:40:25.676474    6592 round_trippers.go:463] GET https://172.19.32.218:8443/version
	I0419 17:40:25.676544    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:25.676544    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:25.676544    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:25.676795    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:25.676795    6592 api_server.go:141] control plane version: v1.30.0
	I0419 17:40:25.676795    6592 api_server.go:131] duration metric: took 9.4866ms to wait for apiserver health ...
	I0419 17:40:25.676795    6592 system_pods.go:43] waiting for kube-system pods to appear ...
	I0419 17:40:25.819508    6592 request.go:629] Waited for 142.7128ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:40:25.819979    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:40:25.819979    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:25.819979    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:25.820089    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:25.831669    6592 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0419 17:40:25.842496    6592 system_pods.go:59] 24 kube-system pods found
	I0419 17:40:25.842496    6592 system_pods.go:61] "coredns-7db6d8ff4d-7mk28" [e9d98fbb-21cc-4618-9709-0b27986c63b1] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "coredns-7db6d8ff4d-vklb9" [a1f46798-9bf9-4abe-9d6d-573902a0d373] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "etcd-ha-095800" [1aaf32fa-58bb-40f3-a162-21259eb4f376] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "etcd-ha-095800-m02" [5b0fc0be-2f86-4758-b8eb-aeb31245afd7] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "etcd-ha-095800-m03" [8532b3ac-29de-4ca5-bfc9-68af08e21e6c] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kindnet-76q26" [a98d461e-7b24-43a6-b11b-4875d803e532] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kindnet-7j4cr" [92ce62b8-71b2-4deb-b295-cf938509a4e5] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kindnet-kpn69" [49ffd8bc-d455-4f64-9822-e2d363df7cc7] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kube-apiserver-ha-095800" [ebaad661-6759-415e-b65f-14d6ffb46853] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kube-apiserver-ha-095800-m02" [99267604-9885-472a-aab9-eda6b150457d] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kube-apiserver-ha-095800-m03" [4085bd90-5449-4c48-9d26-f2ff9c364b8b] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kube-controller-manager-ha-095800" [dc9b9d64-b78b-44e3-a7f6-26ba6007b6dc] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kube-controller-manager-ha-095800-m02" [534ea924-2ff9-48ec-a02c-ce23e4c47324] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kube-controller-manager-ha-095800-m03" [f94ddaec-87d7-41f1-88f5-ec9ef37eb9a5] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kube-proxy-4nldk" [79c714ec-b6ec-4cff-86fb-f560bed67202] Running
	I0419 17:40:25.843031    6592 system_pods.go:61] "kube-proxy-5dp8h" [4a95a0be-301a-482f-a714-3f918af5832c] Running
	I0419 17:40:25.843031    6592 system_pods.go:61] "kube-proxy-vq826" [d2b22474-6974-4cbd-8565-95facc3c817e] Running
	I0419 17:40:25.843031    6592 system_pods.go:61] "kube-scheduler-ha-095800" [af0f5d53-c6ab-4235-b9a2-ce0a371ff55f] Running
	I0419 17:40:25.843031    6592 system_pods.go:61] "kube-scheduler-ha-095800-m02" [000d5f12-1c3f-41ba-b0dd-696da8c6b8ad] Running
	I0419 17:40:25.843031    6592 system_pods.go:61] "kube-scheduler-ha-095800-m03" [c9432782-9134-4e45-b8c4-8585290ca2fc] Running
	I0419 17:40:25.843031    6592 system_pods.go:61] "kube-vip-ha-095800" [2fe74317-1ff4-4147-ae17-f2f31f4f06ba] Running
	I0419 17:40:25.843031    6592 system_pods.go:61] "kube-vip-ha-095800-m02" [e80eec5a-c346-4f90-a843-b6ed2d111f0b] Running
	I0419 17:40:25.843031    6592 system_pods.go:61] "kube-vip-ha-095800-m03" [5da00673-3a8b-41ac-8b5a-ec217012aeee] Running
	I0419 17:40:25.843031    6592 system_pods.go:61] "storage-provisioner" [f58269e6-1ef1-442a-972b-cc05662b174c] Running
	I0419 17:40:25.843031    6592 system_pods.go:74] duration metric: took 166.2358ms to wait for pod list to return data ...
	I0419 17:40:25.843031    6592 default_sa.go:34] waiting for default service account to be created ...
	I0419 17:40:26.019554    6592 request.go:629] Waited for 176.0728ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/default/serviceaccounts
	I0419 17:40:26.019554    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/default/serviceaccounts
	I0419 17:40:26.019554    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:26.019554    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:26.019554    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:26.020328    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:26.024992    6592 default_sa.go:45] found service account: "default"
	I0419 17:40:26.025061    6592 default_sa.go:55] duration metric: took 182.0303ms for default service account to be created ...
	I0419 17:40:26.025061    6592 system_pods.go:116] waiting for k8s-apps to be running ...
	I0419 17:40:26.214983    6592 request.go:629] Waited for 189.3667ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:40:26.215151    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:40:26.215185    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:26.215185    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:26.215185    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:26.217978    6592 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:40:26.236655    6592 system_pods.go:86] 24 kube-system pods found
	I0419 17:40:26.236718    6592 system_pods.go:89] "coredns-7db6d8ff4d-7mk28" [e9d98fbb-21cc-4618-9709-0b27986c63b1] Running
	I0419 17:40:26.236718    6592 system_pods.go:89] "coredns-7db6d8ff4d-vklb9" [a1f46798-9bf9-4abe-9d6d-573902a0d373] Running
	I0419 17:40:26.236718    6592 system_pods.go:89] "etcd-ha-095800" [1aaf32fa-58bb-40f3-a162-21259eb4f376] Running
	I0419 17:40:26.236783    6592 system_pods.go:89] "etcd-ha-095800-m02" [5b0fc0be-2f86-4758-b8eb-aeb31245afd7] Running
	I0419 17:40:26.236783    6592 system_pods.go:89] "etcd-ha-095800-m03" [8532b3ac-29de-4ca5-bfc9-68af08e21e6c] Running
	I0419 17:40:26.236783    6592 system_pods.go:89] "kindnet-76q26" [a98d461e-7b24-43a6-b11b-4875d803e532] Running
	I0419 17:40:26.236835    6592 system_pods.go:89] "kindnet-7j4cr" [92ce62b8-71b2-4deb-b295-cf938509a4e5] Running
	I0419 17:40:26.236853    6592 system_pods.go:89] "kindnet-kpn69" [49ffd8bc-d455-4f64-9822-e2d363df7cc7] Running
	I0419 17:40:26.236853    6592 system_pods.go:89] "kube-apiserver-ha-095800" [ebaad661-6759-415e-b65f-14d6ffb46853] Running
	I0419 17:40:26.236853    6592 system_pods.go:89] "kube-apiserver-ha-095800-m02" [99267604-9885-472a-aab9-eda6b150457d] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-apiserver-ha-095800-m03" [4085bd90-5449-4c48-9d26-f2ff9c364b8b] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-controller-manager-ha-095800" [dc9b9d64-b78b-44e3-a7f6-26ba6007b6dc] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-controller-manager-ha-095800-m02" [534ea924-2ff9-48ec-a02c-ce23e4c47324] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-controller-manager-ha-095800-m03" [f94ddaec-87d7-41f1-88f5-ec9ef37eb9a5] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-proxy-4nldk" [79c714ec-b6ec-4cff-86fb-f560bed67202] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-proxy-5dp8h" [4a95a0be-301a-482f-a714-3f918af5832c] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-proxy-vq826" [d2b22474-6974-4cbd-8565-95facc3c817e] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-scheduler-ha-095800" [af0f5d53-c6ab-4235-b9a2-ce0a371ff55f] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-scheduler-ha-095800-m02" [000d5f12-1c3f-41ba-b0dd-696da8c6b8ad] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-scheduler-ha-095800-m03" [c9432782-9134-4e45-b8c4-8585290ca2fc] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-vip-ha-095800" [2fe74317-1ff4-4147-ae17-f2f31f4f06ba] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-vip-ha-095800-m02" [e80eec5a-c346-4f90-a843-b6ed2d111f0b] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-vip-ha-095800-m03" [5da00673-3a8b-41ac-8b5a-ec217012aeee] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "storage-provisioner" [f58269e6-1ef1-442a-972b-cc05662b174c] Running
	I0419 17:40:26.236907    6592 system_pods.go:126] duration metric: took 211.8447ms to wait for k8s-apps to be running ...
	I0419 17:40:26.236907    6592 system_svc.go:44] waiting for kubelet service to be running ....
	I0419 17:40:26.246212    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 17:40:26.287178    6592 system_svc.go:56] duration metric: took 50.2154ms WaitForService to wait for kubelet
	I0419 17:40:26.287240    6592 kubeadm.go:576] duration metric: took 15.9824407s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 17:40:26.287313    6592 node_conditions.go:102] verifying NodePressure condition ...
	I0419 17:40:26.423817    6592 request.go:629] Waited for 136.3699ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes
	I0419 17:40:26.423817    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes
	I0419 17:40:26.423817    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:26.423817    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:26.423817    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:26.424469    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:26.431194    6592 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 17:40:26.431194    6592 node_conditions.go:123] node cpu capacity is 2
	I0419 17:40:26.431194    6592 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 17:40:26.431194    6592 node_conditions.go:123] node cpu capacity is 2
	I0419 17:40:26.431194    6592 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 17:40:26.431194    6592 node_conditions.go:123] node cpu capacity is 2
	I0419 17:40:26.431194    6592 node_conditions.go:105] duration metric: took 143.8802ms to run NodePressure ...
	I0419 17:40:26.431194    6592 start.go:240] waiting for startup goroutines ...
	I0419 17:40:26.431798    6592 start.go:254] writing updated cluster config ...
	I0419 17:40:26.445515    6592 ssh_runner.go:195] Run: rm -f paused
	I0419 17:40:26.594780    6592 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0419 17:40:26.598128    6592 out.go:177] * Done! kubectl is now configured to use "ha-095800" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 20 00:32:58 ha-095800 cri-dockerd[1229]: time="2024-04-20T00:32:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/47bf1e62b695ad34069452341aed33d4f1834b56e6650e19b97c79196c398976/resolv.conf as [nameserver 172.19.32.1]"
	Apr 20 00:32:58 ha-095800 cri-dockerd[1229]: time="2024-04-20T00:32:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/457723d9f67a47a801bc203e41fd7d1220640b53afc4a58312931715bb50c367/resolv.conf as [nameserver 172.19.32.1]"
	Apr 20 00:32:58 ha-095800 cri-dockerd[1229]: time="2024-04-20T00:32:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/40281e245fac98dda8e7823d4d2188bb99bc4d1fa819c859b83fe45d7ff725e7/resolv.conf as [nameserver 172.19.32.1]"
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.721909990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.722467406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.722636811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.723071423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.902173598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.902488807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.902508307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.902684412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.953038911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.953120813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.953140214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.953366220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:41:03 ha-095800 dockerd[1325]: time="2024-04-20T00:41:03.394072984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 20 00:41:03 ha-095800 dockerd[1325]: time="2024-04-20T00:41:03.394248080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 00:41:03 ha-095800 dockerd[1325]: time="2024-04-20T00:41:03.394266280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:41:03 ha-095800 dockerd[1325]: time="2024-04-20T00:41:03.395362555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:41:03 ha-095800 cri-dockerd[1229]: time="2024-04-20T00:41:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/534cd974048a518352c11c7b4010b28e8e1f400ad1f4f9b6c123ccf10f57bcdb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 20 00:41:04 ha-095800 cri-dockerd[1229]: time="2024-04-20T00:41:04Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 20 00:41:05 ha-095800 dockerd[1325]: time="2024-04-20T00:41:05.014542285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 20 00:41:05 ha-095800 dockerd[1325]: time="2024-04-20T00:41:05.014826081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 00:41:05 ha-095800 dockerd[1325]: time="2024-04-20T00:41:05.014944979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:41:05 ha-095800 dockerd[1325]: time="2024-04-20T00:41:05.015313174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2e2ed01949e55       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   534cd974048a5       busybox-fc5497c4f-l275w
	c1612d89b19bd       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   40281e245fac9       coredns-7db6d8ff4d-7mk28
	37bb284139899       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   457723d9f67a4       coredns-7db6d8ff4d-vklb9
	4ddb9435774ce       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   47bf1e62b695a       storage-provisioner
	abcfe6bf3c3f8       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              9 minutes ago        Running             kindnet-cni               0                   aae48a51c7222       kindnet-kpn69
	b7a65c81f5f41       a0bf559e280cf                                                                                         9 minutes ago        Running             kube-proxy                0                   9271277bf64ed       kube-proxy-vq826
	6aa83e6a42148       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     9 minutes ago        Running             kube-vip                  0                   dd653687d8d91       kube-vip-ha-095800
	fd73a674b215d       c7aad43836fa5                                                                                         9 minutes ago        Running             kube-controller-manager   0                   8aeedfc48a54a       kube-controller-manager-ha-095800
	10fc813931a16       3861cfcd7c04c                                                                                         9 minutes ago        Running             etcd                      0                   70e54776183a8       etcd-ha-095800
	5b3201e921978       259c8277fcbbc                                                                                         9 minutes ago        Running             kube-scheduler            0                   c1ca7767dd253       kube-scheduler-ha-095800
	9ddfae1ff47d9       c42f13656d0b2                                                                                         9 minutes ago        Running             kube-apiserver            0                   33a33a7a208eb       kube-apiserver-ha-095800
	
	
	==> coredns [37bb28413989] <==
	[INFO] 10.244.0.4:49148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000240397s
	[INFO] 10.244.0.4:34643 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000105598s
	[INFO] 10.244.0.4:36915 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000274996s
	[INFO] 10.244.0.4:41152 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103699s
	[INFO] 10.244.0.4:59052 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000262697s
	[INFO] 10.244.0.4:56196 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073899s
	[INFO] 10.244.2.2:53328 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139098s
	[INFO] 10.244.2.2:38072 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000062099s
	[INFO] 10.244.2.2:58488 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000088899s
	[INFO] 10.244.2.2:55087 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000068099s
	[INFO] 10.244.1.2:45805 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000268096s
	[INFO] 10.244.1.2:55492 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078199s
	[INFO] 10.244.0.4:58168 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000351595s
	[INFO] 10.244.0.4:41098 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060199s
	[INFO] 10.244.2.2:51023 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115198s
	[INFO] 10.244.2.2:49126 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068699s
	[INFO] 10.244.1.2:43231 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207897s
	[INFO] 10.244.1.2:44051 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103999s
	[INFO] 10.244.0.4:38322 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150597s
	[INFO] 10.244.0.4:35307 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149798s
	[INFO] 10.244.0.4:47169 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080399s
	[INFO] 10.244.2.2:39439 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151698s
	[INFO] 10.244.2.2:39046 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154498s
	[INFO] 10.244.2.2:55199 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060499s
	[INFO] 10.244.2.2:47027 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000122398s
	
	
	==> coredns [c1612d89b19b] <==
	[INFO] 10.244.1.2:37673 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.22739666s
	[INFO] 10.244.1.2:57934 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.019259825s
	[INFO] 10.244.1.2:47705 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.113470124s
	[INFO] 10.244.0.4:43777 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144398s
	[INFO] 10.244.0.4:34954 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000177197s
	[INFO] 10.244.0.4:37850 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000124799s
	[INFO] 10.244.2.2:44073 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000105899s
	[INFO] 10.244.1.2:41954 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000260396s
	[INFO] 10.244.1.2:33550 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195998s
	[INFO] 10.244.1.2:54754 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134398s
	[INFO] 10.244.0.4:55985 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000198097s
	[INFO] 10.244.0.4:41839 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012621719s
	[INFO] 10.244.2.2:38470 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000231397s
	[INFO] 10.244.2.2:53036 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000175798s
	[INFO] 10.244.2.2:59372 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064699s
	[INFO] 10.244.2.2:40909 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163398s
	[INFO] 10.244.1.2:42257 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105599s
	[INFO] 10.244.1.2:57777 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086398s
	[INFO] 10.244.0.4:37976 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000166798s
	[INFO] 10.244.0.4:44012 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151998s
	[INFO] 10.244.2.2:35745 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076099s
	[INFO] 10.244.2.2:52538 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073999s
	[INFO] 10.244.1.2:42825 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179197s
	[INFO] 10.244.1.2:45424 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000136098s
	[INFO] 10.244.0.4:34015 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000138898s
	
	
	==> describe nodes <==
	Name:               ha-095800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-095800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-095800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_19T17_32_35_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:32:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-095800
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:42:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:41:35 +0000   Sat, 20 Apr 2024 00:32:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:41:35 +0000   Sat, 20 Apr 2024 00:32:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:41:35 +0000   Sat, 20 Apr 2024 00:32:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:41:35 +0000   Sat, 20 Apr 2024 00:32:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.32.218
	  Hostname:    ha-095800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b35e1ffdd6ce4e3ea019e383acec8f36
	  System UUID:                151afd6c-ea6d-2a4e-971e-0fd2cbdb7589
	  Boot ID:                    e2e9e6fa-ec8c-4a9a-8bee-e4bf0e45825d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-l275w              0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 coredns-7db6d8ff4d-7mk28             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m19s
	  kube-system                 coredns-7db6d8ff4d-vklb9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m19s
	  kube-system                 etcd-ha-095800                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m32s
	  kube-system                 kindnet-kpn69                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m19s
	  kube-system                 kube-apiserver-ha-095800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 kube-controller-manager-ha-095800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 kube-proxy-vq826                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-scheduler-ha-095800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 kube-vip-ha-095800                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m16s  kube-proxy       
	  Normal  Starting                 9m32s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m32s  kubelet          Node ha-095800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m32s  kubelet          Node ha-095800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m32s  kubelet          Node ha-095800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m32s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m19s  node-controller  Node ha-095800 event: Registered Node ha-095800 in Controller
	  Normal  NodeReady                9m9s   kubelet          Node ha-095800 status is now: NodeReady
	  Normal  RegisteredNode           5m25s  node-controller  Node ha-095800 event: Registered Node ha-095800 in Controller
	  Normal  RegisteredNode           102s   node-controller  Node ha-095800 event: Registered Node ha-095800 in Controller
	
	
	Name:               ha-095800-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-095800-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-095800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T17_36_26_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:36:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-095800-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:41:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:41:25 +0000   Sat, 20 Apr 2024 00:36:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:41:25 +0000   Sat, 20 Apr 2024 00:36:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:41:25 +0000   Sat, 20 Apr 2024 00:36:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:41:25 +0000   Sat, 20 Apr 2024 00:36:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.39.106
	  Hostname:    ha-095800-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 f647bbaeeda1463daba8367e17d89c0f
	  System UUID:                11ceb28a-344d-0d49-b8d6-41acde2b853d
	  Boot ID:                    0defed78-62f0-48e9-97c3-1c117ea2506d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dxkjp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 etcd-ha-095800-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m44s
	  kube-system                 kindnet-7j4cr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m47s
	  kube-system                 kube-apiserver-ha-095800-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-controller-manager-ha-095800-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-proxy-4nldk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-scheduler-ha-095800-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-vip-ha-095800-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m41s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m47s (x8 over 5m47s)  kubelet          Node ha-095800-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m47s (x8 over 5m47s)  kubelet          Node ha-095800-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m47s (x7 over 5m47s)  kubelet          Node ha-095800-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m44s                  node-controller  Node ha-095800-m02 event: Registered Node ha-095800-m02 in Controller
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-095800-m02 event: Registered Node ha-095800-m02 in Controller
	  Normal  RegisteredNode           102s                   node-controller  Node ha-095800-m02 event: Registered Node ha-095800-m02 in Controller
	
	
	Name:               ha-095800-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-095800-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-095800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T17_40_09_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:40:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-095800-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:42:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:41:35 +0000   Sat, 20 Apr 2024 00:40:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:41:35 +0000   Sat, 20 Apr 2024 00:40:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:41:35 +0000   Sat, 20 Apr 2024 00:40:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:41:35 +0000   Sat, 20 Apr 2024 00:40:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.47.152
	  Hostname:    ha-095800-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 203d257fe0074ce3b8accd939db5e46a
	  System UUID:                064d6d88-fb2e-6249-b24d-461c3c2fcda0
	  Boot ID:                    a2da1b8d-69bd-4cc7-a1b7-5a0e9e9588ec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tmxkg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 etcd-ha-095800-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m
	  kube-system                 kindnet-76q26                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m3s
	  kube-system                 kube-apiserver-ha-095800-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-ha-095800-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-5dp8h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-ha-095800-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-vip-ha-095800-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 118s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node ha-095800-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node ha-095800-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x7 over 2m3s)  kubelet          Node ha-095800-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m                   node-controller  Node ha-095800-m03 event: Registered Node ha-095800-m03 in Controller
	  Normal  RegisteredNode           119s                 node-controller  Node ha-095800-m03 event: Registered Node ha-095800-m03 in Controller
	  Normal  RegisteredNode           102s                 node-controller  Node ha-095800-m03 event: Registered Node ha-095800-m03 in Controller
	
	
	==> dmesg <==
	[  +1.714275] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.042630] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr20 00:31] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.167423] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[ +29.792396] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.095239] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.595379] systemd-fstab-generator[985]: Ignoring "noauto" option for root device
	[  +0.200936] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.229039] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[Apr20 00:32] systemd-fstab-generator[1182]: Ignoring "noauto" option for root device
	[  +0.200197] systemd-fstab-generator[1194]: Ignoring "noauto" option for root device
	[  +0.185601] systemd-fstab-generator[1207]: Ignoring "noauto" option for root device
	[  +0.292167] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[ +11.677495] systemd-fstab-generator[1311]: Ignoring "noauto" option for root device
	[  +0.114912] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.941757] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	[  +6.709210] systemd-fstab-generator[1718]: Ignoring "noauto" option for root device
	[  +0.097038] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.205748] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.867513] systemd-fstab-generator[2209]: Ignoring "noauto" option for root device
	[ +14.771126] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.801145] kauditd_printk_skb: 29 callbacks suppressed
	[Apr20 00:36] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [10fc813931a1] <==
	{"level":"info","ts":"2024-04-20T00:40:06.235192Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b54e7fd34aca7b60","remote-peer-id":"aa9a53c8b5f39f20"}
	{"level":"info","ts":"2024-04-20T00:40:06.397885Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b54e7fd34aca7b60","to":"aa9a53c8b5f39f20","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-20T00:40:06.397951Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b54e7fd34aca7b60","remote-peer-id":"aa9a53c8b5f39f20"}
	{"level":"warn","ts":"2024-04-20T00:40:06.426066Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"aa9a53c8b5f39f20","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-04-20T00:40:06.428814Z","caller":"traceutil/trace.go:171","msg":"trace[2070950670] linearizableReadLoop","detail":"{readStateIndex:1617; appliedIndex:1617; }","duration":"156.64327ms","start":"2024-04-20T00:40:06.272156Z","end":"2024-04-20T00:40:06.428799Z","steps":["trace[2070950670] 'read index received'  (duration: 156.63767ms)","trace[2070950670] 'applied index is now lower than readState.Index'  (duration: 4.6µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-20T00:40:06.42915Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.974663ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:433"}
	{"level":"info","ts":"2024-04-20T00:40:06.429406Z","caller":"traceutil/trace.go:171","msg":"trace[1696042284] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:1450; }","duration":"157.264256ms","start":"2024-04-20T00:40:06.272129Z","end":"2024-04-20T00:40:06.429394Z","steps":["trace[1696042284] 'agreement among raft nodes before linearized reading'  (duration: 156.795166ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:40:06.472793Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b54e7fd34aca7b60","to":"aa9a53c8b5f39f20","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-20T00:40:06.472867Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b54e7fd34aca7b60","remote-peer-id":"aa9a53c8b5f39f20"}
	{"level":"warn","ts":"2024-04-20T00:40:07.276958Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"aa9a53c8b5f39f20","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-04-20T00:40:08.276775Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"aa9a53c8b5f39f20","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-04-20T00:40:08.782848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b54e7fd34aca7b60 switched to configuration voters=(3087912449143556279 12293230254372396832 13064520114517998432)"}
	{"level":"info","ts":"2024-04-20T00:40:08.783519Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"6d1bebe057f792b8","local-member-id":"b54e7fd34aca7b60"}
	{"level":"info","ts":"2024-04-20T00:40:08.783624Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"b54e7fd34aca7b60","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"aa9a53c8b5f39f20"}
	{"level":"warn","ts":"2024-04-20T00:40:16.962361Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"aa9a53c8b5f39f20","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"110.821388ms"}
	{"level":"warn","ts":"2024-04-20T00:40:16.962427Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"2ada780314b548b7","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"110.892886ms"}
	{"level":"info","ts":"2024-04-20T00:40:16.967803Z","caller":"traceutil/trace.go:171","msg":"trace[1075111396] linearizableReadLoop","detail":"{readStateIndex:1697; appliedIndex:1697; }","duration":"272.823545ms","start":"2024-04-20T00:40:16.694963Z","end":"2024-04-20T00:40:16.967787Z","steps":["trace[1075111396] 'read index received'  (duration: 272.819145ms)","trace[1075111396] 'applied index is now lower than readState.Index'  (duration: 3.1µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-20T00:40:17.083553Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.660635ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-20T00:40:17.0837Z","caller":"traceutil/trace.go:171","msg":"trace[544210169] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1522; }","duration":"246.830031ms","start":"2024-04-20T00:40:16.836855Z","end":"2024-04-20T00:40:17.083685Z","steps":["trace[544210169] 'range keys from in-memory index tree'  (duration: 246.646235ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:40:17.08398Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.296563ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8890262864178383614 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:1521 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:369 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-20T00:40:17.084112Z","caller":"traceutil/trace.go:171","msg":"trace[298864052] transaction","detail":"{read_only:false; response_revision:1523; number_of_response:1; }","duration":"427.002366ms","start":"2024-04-20T00:40:16.657093Z","end":"2024-04-20T00:40:17.084095Z","steps":["trace[298864052] 'process raft request'  (duration: 305.529206ms)","trace[298864052] 'compare'  (duration: 121.01817ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-20T00:40:17.084257Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:40:16.657079Z","time spent":"427.150662ms","remote":"127.0.0.1:33890","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":419,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:1521 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:369 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >"}
	{"level":"warn","ts":"2024-04-20T00:40:17.083589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"388.631031ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-095800-m03\" ","response":"range_response_count:1 size:4442"}
	{"level":"info","ts":"2024-04-20T00:40:17.085363Z","caller":"traceutil/trace.go:171","msg":"trace[1941182063] range","detail":"{range_begin:/registry/minions/ha-095800-m03; range_end:; response_count:1; response_revision:1522; }","duration":"390.44019ms","start":"2024-04-20T00:40:16.694912Z","end":"2024-04-20T00:40:17.085352Z","steps":["trace[1941182063] 'agreement among raft nodes before linearized reading'  (duration: 272.934242ms)","trace[1941182063] 'range keys from in-memory index tree'  (duration: 115.529493ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-20T00:40:17.085461Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:40:16.694776Z","time spent":"390.675385ms","remote":"127.0.0.1:33804","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":4465,"request content":"key:\"/registry/minions/ha-095800-m03\" "}
	
	
	==> kernel <==
	 00:42:06 up 11 min,  0 users,  load average: 0.75, 0.51, 0.28
	Linux ha-095800 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [abcfe6bf3c3f] <==
	I0420 00:41:17.935469       1 main.go:250] Node ha-095800-m03 has CIDR [10.244.2.0/24] 
	I0420 00:41:27.945088       1 main.go:223] Handling node with IPs: map[172.19.32.218:{}]
	I0420 00:41:27.945206       1 main.go:227] handling current node
	I0420 00:41:27.945222       1 main.go:223] Handling node with IPs: map[172.19.39.106:{}]
	I0420 00:41:27.945230       1 main.go:250] Node ha-095800-m02 has CIDR [10.244.1.0/24] 
	I0420 00:41:27.945745       1 main.go:223] Handling node with IPs: map[172.19.47.152:{}]
	I0420 00:41:27.945846       1 main.go:250] Node ha-095800-m03 has CIDR [10.244.2.0/24] 
	I0420 00:41:37.965590       1 main.go:223] Handling node with IPs: map[172.19.32.218:{}]
	I0420 00:41:37.965872       1 main.go:227] handling current node
	I0420 00:41:37.965983       1 main.go:223] Handling node with IPs: map[172.19.39.106:{}]
	I0420 00:41:37.966091       1 main.go:250] Node ha-095800-m02 has CIDR [10.244.1.0/24] 
	I0420 00:41:37.966464       1 main.go:223] Handling node with IPs: map[172.19.47.152:{}]
	I0420 00:41:37.966500       1 main.go:250] Node ha-095800-m03 has CIDR [10.244.2.0/24] 
	I0420 00:41:47.989982       1 main.go:223] Handling node with IPs: map[172.19.32.218:{}]
	I0420 00:41:47.990026       1 main.go:227] handling current node
	I0420 00:41:47.990040       1 main.go:223] Handling node with IPs: map[172.19.39.106:{}]
	I0420 00:41:47.990048       1 main.go:250] Node ha-095800-m02 has CIDR [10.244.1.0/24] 
	I0420 00:41:47.990564       1 main.go:223] Handling node with IPs: map[172.19.47.152:{}]
	I0420 00:41:47.990581       1 main.go:250] Node ha-095800-m03 has CIDR [10.244.2.0/24] 
	I0420 00:41:57.998343       1 main.go:223] Handling node with IPs: map[172.19.32.218:{}]
	I0420 00:41:57.998719       1 main.go:227] handling current node
	I0420 00:41:57.998901       1 main.go:223] Handling node with IPs: map[172.19.39.106:{}]
	I0420 00:41:57.999082       1 main.go:250] Node ha-095800-m02 has CIDR [10.244.1.0/24] 
	I0420 00:41:57.999402       1 main.go:223] Handling node with IPs: map[172.19.47.152:{}]
	I0420 00:41:57.999642       1 main.go:250] Node ha-095800-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [9ddfae1ff47d] <==
	I0420 00:32:34.179074       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0420 00:32:34.223511       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0420 00:32:34.264599       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0420 00:32:47.724594       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0420 00:32:47.868350       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0420 00:40:04.204262       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0420 00:40:04.204400       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0420 00:40:04.204842       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 417.19µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0420 00:40:04.206658       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0420 00:40:04.207245       1 timeout.go:142] post-timeout activity - time-elapsed: 3.301126ms, PATCH "/api/v1/namespaces/default/events/ha-095800-m03.17c7d61d026713ec" result: <nil>
	E0420 00:41:08.511459       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52043: use of closed network connection
	E0420 00:41:08.979968       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52046: use of closed network connection
	E0420 00:41:10.502886       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52048: use of closed network connection
	E0420 00:41:11.034557       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52050: use of closed network connection
	E0420 00:41:11.529945       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52052: use of closed network connection
	E0420 00:41:12.000231       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52054: use of closed network connection
	E0420 00:41:12.457499       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52056: use of closed network connection
	E0420 00:41:12.944793       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52058: use of closed network connection
	E0420 00:41:13.434584       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52060: use of closed network connection
	E0420 00:41:14.324688       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52063: use of closed network connection
	E0420 00:41:24.766093       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52065: use of closed network connection
	E0420 00:41:25.241133       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52068: use of closed network connection
	E0420 00:41:35.710060       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52070: use of closed network connection
	E0420 00:41:36.152452       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52072: use of closed network connection
	E0420 00:41:46.628651       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52074: use of closed network connection
	
	
	==> kube-controller-manager [fd73a674b215] <==
	I0420 00:32:59.812746       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="182.298µs"
	I0420 00:32:59.873471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.813773ms"
	I0420 00:32:59.875689       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="100.999µs"
	I0420 00:32:59.923084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.020444ms"
	I0420 00:32:59.923606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.799µs"
	I0420 00:33:02.113737       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0420 00:36:19.287062       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-095800-m02\" does not exist"
	I0420 00:36:19.303697       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-095800-m02" podCIDRs=["10.244.1.0/24"]
	I0420 00:36:22.152636       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-095800-m02"
	I0420 00:40:03.375549       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-095800-m03\" does not exist"
	I0420 00:40:03.407207       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-095800-m03" podCIDRs=["10.244.2.0/24"]
	I0420 00:40:07.222491       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-095800-m03"
	I0420 00:41:02.246903       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="217.288076ms"
	I0420 00:41:02.529790       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="282.799891ms"
	I0420 00:41:02.672089       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="141.963982ms"
	I0420 00:41:02.733108       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.954119ms"
	I0420 00:41:02.733364       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.698µs"
	I0420 00:41:03.359430       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.099µs"
	I0420 00:41:04.321851       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.599µs"
	I0420 00:41:05.316948       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.786519ms"
	I0420 00:41:05.317528       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.199µs"
	I0420 00:41:05.497630       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.71754ms"
	I0420 00:41:05.498011       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="253.397µs"
	I0420 00:41:05.863924       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.125125ms"
	I0420 00:41:05.864826       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="766.889µs"
	
	
	==> kube-proxy [b7a65c81f5f4] <==
	I0420 00:32:50.078575       1 server_linux.go:69] "Using iptables proxy"
	I0420 00:32:50.124878       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.32.218"]
	I0420 00:32:50.223572       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 00:32:50.223719       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 00:32:50.223756       1 server_linux.go:165] "Using iptables Proxier"
	I0420 00:32:50.234624       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 00:32:50.241388       1 server.go:872] "Version info" version="v1.30.0"
	I0420 00:32:50.241441       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:32:50.308364       1 config.go:101] "Starting endpoint slice config controller"
	I0420 00:32:50.310350       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 00:32:50.310451       1 config.go:192] "Starting service config controller"
	I0420 00:32:50.310477       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 00:32:50.327055       1 config.go:319] "Starting node config controller"
	I0420 00:32:50.327072       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 00:32:50.410658       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 00:32:50.410774       1 shared_informer.go:320] Caches are synced for service config
	I0420 00:32:50.428546       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5b3201e92197] <==
	W0420 00:32:32.088798       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0420 00:32:32.088846       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0420 00:32:32.106956       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0420 00:32:32.107101       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0420 00:32:32.139575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0420 00:32:32.139846       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0420 00:32:32.145405       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 00:32:32.145555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0420 00:32:32.153658       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0420 00:32:32.153850       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0420 00:32:32.210124       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 00:32:32.210513       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 00:32:32.246495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 00:32:32.246698       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 00:32:32.263161       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0420 00:32:32.263302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0420 00:32:32.286496       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0420 00:32:32.287007       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0420 00:32:32.383039       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 00:32:32.383395       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0420 00:32:35.082995       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0420 00:40:03.496670       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5dp8h\": pod kube-proxy-5dp8h is already assigned to node \"ha-095800-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5dp8h" node="ha-095800-m03"
	E0420 00:40:03.496769       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4a95a0be-301a-482f-a714-3f918af5832c(kube-system/kube-proxy-5dp8h) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5dp8h"
	E0420 00:40:03.496800       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5dp8h\": pod kube-proxy-5dp8h is already assigned to node \"ha-095800-m03\"" pod="kube-system/kube-proxy-5dp8h"
	I0420 00:40:03.496847       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5dp8h" node="ha-095800-m03"
	
	
	==> kubelet <==
	Apr 20 00:39:34 ha-095800 kubelet[2216]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:39:34 ha-095800 kubelet[2216]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:40:34 ha-095800 kubelet[2216]: E0420 00:40:34.318777    2216 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:40:34 ha-095800 kubelet[2216]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:40:34 ha-095800 kubelet[2216]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:40:34 ha-095800 kubelet[2216]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:40:34 ha-095800 kubelet[2216]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:41:02 ha-095800 kubelet[2216]: I0420 00:41:02.195212    2216 topology_manager.go:215] "Topology Admit Handler" podUID="3453ba9e-03c3-4997-b521-cc7837c3b0a9" podNamespace="default" podName="busybox-fc5497c4f-l275w"
	Apr 20 00:41:02 ha-095800 kubelet[2216]: W0420 00:41:02.204323    2216 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-095800" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-095800' and this object
	Apr 20 00:41:02 ha-095800 kubelet[2216]: E0420 00:41:02.204403    2216 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-095800" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-095800' and this object
	Apr 20 00:41:02 ha-095800 kubelet[2216]: I0420 00:41:02.337494    2216 topology_manager.go:215] "Topology Admit Handler" podUID="ebbb6a5c-8671-4230-aabf-d366125f4ebb" podNamespace="default" podName="busybox-fc5497c4f-78zdr"
	Apr 20 00:41:02 ha-095800 kubelet[2216]: I0420 00:41:02.370024    2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdzrz\" (UniqueName: \"kubernetes.io/projected/3453ba9e-03c3-4997-b521-cc7837c3b0a9-kube-api-access-sdzrz\") pod \"busybox-fc5497c4f-l275w\" (UID: \"3453ba9e-03c3-4997-b521-cc7837c3b0a9\") " pod="default/busybox-fc5497c4f-l275w"
	Apr 20 00:41:02 ha-095800 kubelet[2216]: E0420 00:41:02.405757    2216 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-b7v5m], unattached volumes=[], failed to process volumes=[]: context canceled" pod="default/busybox-fc5497c4f-78zdr" podUID="ebbb6a5c-8671-4230-aabf-d366125f4ebb"
	Apr 20 00:41:02 ha-095800 kubelet[2216]: I0420 00:41:02.472018    2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7v5m\" (UniqueName: \"kubernetes.io/projected/ebbb6a5c-8671-4230-aabf-d366125f4ebb-kube-api-access-b7v5m\") pod \"busybox-fc5497c4f-78zdr\" (UID: \"ebbb6a5c-8671-4230-aabf-d366125f4ebb\") " pod="default/busybox-fc5497c4f-78zdr"
	Apr 20 00:41:03 ha-095800 kubelet[2216]: E0420 00:41:03.095449    2216 projected.go:200] Error preparing data for projected volume kube-api-access-b7v5m for pod default/busybox-fc5497c4f-78zdr: failed to fetch token: pod "busybox-fc5497c4f-78zdr" not found
	Apr 20 00:41:03 ha-095800 kubelet[2216]: E0420 00:41:03.097412    2216 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebbb6a5c-8671-4230-aabf-d366125f4ebb-kube-api-access-b7v5m podName:ebbb6a5c-8671-4230-aabf-d366125f4ebb nodeName:}" failed. No retries permitted until 2024-04-20 00:41:03.595741846 +0000 UTC m=+509.538710694 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b7v5m" (UniqueName: "kubernetes.io/projected/ebbb6a5c-8671-4230-aabf-d366125f4ebb-kube-api-access-b7v5m") pod "busybox-fc5497c4f-78zdr" (UID: "ebbb6a5c-8671-4230-aabf-d366125f4ebb") : failed to fetch token: pod "busybox-fc5497c4f-78zdr" not found
	Apr 20 00:41:03 ha-095800 kubelet[2216]: I0420 00:41:03.178786    2216 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-b7v5m\" (UniqueName: \"kubernetes.io/projected/ebbb6a5c-8671-4230-aabf-d366125f4ebb-kube-api-access-b7v5m\") on node \"ha-095800\" DevicePath \"\""
	Apr 20 00:41:03 ha-095800 kubelet[2216]: I0420 00:41:03.639062    2216 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="534cd974048a518352c11c7b4010b28e8e1f400ad1f4f9b6c123ccf10f57bcdb"
	Apr 20 00:41:04 ha-095800 kubelet[2216]: I0420 00:41:04.307987    2216 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebbb6a5c-8671-4230-aabf-d366125f4ebb" path="/var/lib/kubelet/pods/ebbb6a5c-8671-4230-aabf-d366125f4ebb/volumes"
	Apr 20 00:41:05 ha-095800 kubelet[2216]: I0420 00:41:05.744193    2216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-l275w" podStartSLOduration=2.6383838109999997 podStartE2EDuration="3.744170936s" podCreationTimestamp="2024-04-20 00:41:02 +0000 UTC" firstStartedPulling="2024-04-20 00:41:03.704172956 +0000 UTC m=+509.647141904" lastFinishedPulling="2024-04-20 00:41:04.809960181 +0000 UTC m=+510.752929029" observedRunningTime="2024-04-20 00:41:05.743629344 +0000 UTC m=+511.686598192" watchObservedRunningTime="2024-04-20 00:41:05.744170936 +0000 UTC m=+511.687139784"
	Apr 20 00:41:34 ha-095800 kubelet[2216]: E0420 00:41:34.316785    2216 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:41:34 ha-095800 kubelet[2216]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:41:34 ha-095800 kubelet[2216]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:41:34 ha-095800 kubelet[2216]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:41:34 ha-095800 kubelet[2216]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0419 17:41:58.598304   10144 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-095800 -n ha-095800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-095800 -n ha-095800: (11.8709267s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-095800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (67.18s)
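A note on reading the kubelet `pod_startup_latency_tracker` line in the log above: the reported `podStartE2EDuration` appears to be simply the gap between `podCreationTimestamp` and `watchObservedRunningTime`. A quick sanity check in Python (timestamps copied from that log line, truncated to microseconds since `datetime` has no nanosecond support):

```python
from datetime import datetime

# Timestamps copied from the kubelet pod_startup_latency_tracker entry above,
# truncated to microsecond precision (Python's datetime cannot parse nanoseconds).
FMT = "%Y-%m-%d %H:%M:%S.%f %z"
created = datetime.strptime("2024-04-20 00:41:02.000000 +0000", FMT)
running = datetime.strptime("2024-04-20 00:41:05.744170 +0000", FMT)

# Elapsed seconds between pod creation and observed running time.
e2e = (running - created).total_seconds()
print(f"{e2e:.6f}s")  # matches the logged podStartE2EDuration="3.744170936s" to µs precision
```

This only confirms arithmetic on the logged values; it says nothing about why the surrounding test failed.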

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (135.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-095800 node start m02 -v=7 --alsologtostderr: exit status 1 (53.4886855s)

                                                
                                                
-- stdout --
	* Starting "ha-095800-m02" control-plane node in "ha-095800" cluster
	* Restarting existing hyperv VM for "ha-095800-m02" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0419 17:58:40.265132    1140 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0419 17:58:40.267966    1140 out.go:291] Setting OutFile to fd 784 ...
	I0419 17:58:40.290242    1140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 17:58:40.290403    1140 out.go:304] Setting ErrFile to fd 976...
	I0419 17:58:40.290403    1140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 17:58:40.303292    1140 mustload.go:65] Loading cluster: ha-095800
	I0419 17:58:40.310106    1140 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:58:40.310455    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:58:42.347286    1140 main.go:141] libmachine: [stdout =====>] : Off
	
	I0419 17:58:42.347286    1140 main.go:141] libmachine: [stderr =====>] : 
	W0419 17:58:42.347286    1140 host.go:58] "ha-095800-m02" host status: Stopped
	I0419 17:58:42.350567    1140 out.go:177] * Starting "ha-095800-m02" control-plane node in "ha-095800" cluster
	I0419 17:58:42.352737    1140 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 17:58:42.353003    1140 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0419 17:58:42.353003    1140 cache.go:56] Caching tarball of preloaded images
	I0419 17:58:42.353647    1140 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0419 17:58:42.353796    1140 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 17:58:42.354207    1140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
	I0419 17:58:42.357633    1140 start.go:360] acquireMachinesLock for ha-095800-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 17:58:42.357633    1140 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-095800-m02"
	I0419 17:58:42.357633    1140 start.go:96] Skipping create...Using existing machine configuration
	I0419 17:58:42.357633    1140 fix.go:54] fixHost starting: m02
	I0419 17:58:42.358854    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:58:44.430513    1140 main.go:141] libmachine: [stdout =====>] : Off
	
	I0419 17:58:44.430513    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:58:44.430735    1140 fix.go:112] recreateIfNeeded on ha-095800-m02: state=Stopped err=<nil>
	W0419 17:58:44.430735    1140 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 17:58:44.433991    1140 out.go:177] * Restarting existing hyperv VM for "ha-095800-m02" ...
	I0419 17:58:44.435560    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-095800-m02
	I0419 17:58:47.447542    1140 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:58:47.447542    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:58:47.447542    1140 main.go:141] libmachine: Waiting for host to start...
	I0419 17:58:47.447542    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:58:49.639779    1140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:58:49.639779    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:58:49.652437    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:58:52.099641    1140 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:58:52.106687    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:58:53.124333    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:58:55.243127    1140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:58:55.256684    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:58:55.257018    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:58:57.735092    1140 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:58:57.735092    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:58:58.747487    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:59:00.852070    1140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:59:00.852070    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:59:00.860664    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:59:03.351647    1140 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:59:03.351647    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:59:04.361803    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:59:06.472991    1140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:59:06.472991    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:59:06.474704    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:59:08.954048    1140 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:59:08.954256    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:59:09.967828    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:59:12.129027    1140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:59:12.129027    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:59:12.129263    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:59:14.619706    1140 main.go:141] libmachine: [stdout =====>] : 172.19.36.69
	
	I0419 17:59:14.619706    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:59:14.634919    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:59:16.692543    1140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:59:16.692543    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:59:16.692810    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:59:19.208566    1140 main.go:141] libmachine: [stdout =====>] : 172.19.36.69
	
	I0419 17:59:19.208630    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:59:19.208630    1140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
	I0419 17:59:19.211437    1140 machine.go:94] provisionDockerMachine start ...
	I0419 17:59:19.211655    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:59:21.280334    1140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:59:21.280334    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:59:21.280453    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:59:23.835810    1140 main.go:141] libmachine: [stdout =====>] : 172.19.36.69
	
	I0419 17:59:23.835810    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:59:23.841551    1140 main.go:141] libmachine: Using SSH client type: native
	I0419 17:59:23.842305    1140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.36.69 22 <nil> <nil>}
	I0419 17:59:23.842305    1140 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 17:59:23.983326    1140 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0419 17:59:23.983474    1140 buildroot.go:166] provisioning hostname "ha-095800-m02"
	I0419 17:59:23.983474    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:59:26.037119    1140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:59:26.037199    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:59:26.037268    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:59:28.519691    1140 main.go:141] libmachine: [stdout =====>] : 172.19.36.69
	
	I0419 17:59:28.529925    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:59:28.536828    1140 main.go:141] libmachine: Using SSH client type: native
	I0419 17:59:28.536828    1140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.36.69 22 <nil> <nil>}
	I0419 17:59:28.537410    1140 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-095800-m02 && echo "ha-095800-m02" | sudo tee /etc/hostname
	I0419 17:59:28.701511    1140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-095800-m02
	
	I0419 17:59:28.701646    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:59:30.729541    1140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:59:30.729541    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:59:30.729541    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:59:33.227986    1140 main.go:141] libmachine: [stdout =====>] : 172.19.36.69
	
	I0419 17:59:33.227986    1140 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:59:33.245027    1140 main.go:141] libmachine: Using SSH client type: native
	I0419 17:59:33.245657    1140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.36.69 22 <nil> <nil>}
	I0419 17:59:33.245657    1140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-095800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-095800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-095800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 17:59:33.402928    1140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 17:59:33.403005    1140 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0419 17:59:33.403058    1140 buildroot.go:174] setting up certificates
	I0419 17:59:33.403058    1140 provision.go:84] configureAuth start
	I0419 17:59:33.403250    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state

                                                
                                                
** /stderr **
ha_test.go:422: W0419 17:58:40.265132    1140 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0419 17:58:40.267966    1140 out.go:291] Setting OutFile to fd 784 ...
I0419 17:58:40.290242    1140 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 17:58:40.290403    1140 out.go:304] Setting ErrFile to fd 976...
I0419 17:58:40.290403    1140 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 17:58:40.303292    1140 mustload.go:65] Loading cluster: ha-095800
I0419 17:58:40.310106    1140 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 17:58:40.310455    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
I0419 17:58:42.347286    1140 main.go:141] libmachine: [stdout =====>] : Off

                                                
                                                
I0419 17:58:42.347286    1140 main.go:141] libmachine: [stderr =====>] : 
W0419 17:58:42.347286    1140 host.go:58] "ha-095800-m02" host status: Stopped
I0419 17:58:42.350567    1140 out.go:177] * Starting "ha-095800-m02" control-plane node in "ha-095800" cluster
I0419 17:58:42.352737    1140 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0419 17:58:42.353003    1140 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
I0419 17:58:42.353003    1140 cache.go:56] Caching tarball of preloaded images
I0419 17:58:42.353647    1140 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0419 17:58:42.353796    1140 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0419 17:58:42.354207    1140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
I0419 17:58:42.357633    1140 start.go:360] acquireMachinesLock for ha-095800-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0419 17:58:42.357633    1140 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-095800-m02"
I0419 17:58:42.357633    1140 start.go:96] Skipping create...Using existing machine configuration
I0419 17:58:42.357633    1140 fix.go:54] fixHost starting: m02
I0419 17:58:42.358854    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
I0419 17:58:44.430513    1140 main.go:141] libmachine: [stdout =====>] : Off

                                                
                                                
I0419 17:58:44.430513    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:58:44.430735    1140 fix.go:112] recreateIfNeeded on ha-095800-m02: state=Stopped err=<nil>
W0419 17:58:44.430735    1140 fix.go:138] unexpected machine state, will restart: <nil>
I0419 17:58:44.433991    1140 out.go:177] * Restarting existing hyperv VM for "ha-095800-m02" ...
I0419 17:58:44.435560    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-095800-m02
I0419 17:58:47.447542    1140 main.go:141] libmachine: [stdout =====>] : 
I0419 17:58:47.447542    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:58:47.447542    1140 main.go:141] libmachine: Waiting for host to start...
I0419 17:58:47.447542    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
I0419 17:58:49.639779    1140 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0419 17:58:49.639779    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:58:49.652437    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
I0419 17:58:52.099641    1140 main.go:141] libmachine: [stdout =====>] : 
I0419 17:58:52.106687    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:58:53.124333    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
I0419 17:58:55.243127    1140 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0419 17:58:55.256684    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:58:55.257018    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
I0419 17:58:57.735092    1140 main.go:141] libmachine: [stdout =====>] : 
I0419 17:58:57.735092    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:58:58.747487    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
I0419 17:59:00.852070    1140 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0419 17:59:00.852070    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:59:00.860664    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
I0419 17:59:03.351647    1140 main.go:141] libmachine: [stdout =====>] : 
I0419 17:59:03.351647    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:59:04.361803    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
I0419 17:59:06.472991    1140 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0419 17:59:06.472991    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:59:06.474704    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
I0419 17:59:08.954048    1140 main.go:141] libmachine: [stdout =====>] : 
I0419 17:59:08.954256    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:59:09.967828    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
I0419 17:59:12.129027    1140 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0419 17:59:12.129027    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:59:12.129263    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
I0419 17:59:14.619706    1140 main.go:141] libmachine: [stdout =====>] : 172.19.36.69

                                                
                                                
I0419 17:59:14.619706    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:59:14.634919    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
I0419 17:59:16.692543    1140 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0419 17:59:16.692543    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:59:16.692810    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
I0419 17:59:19.208566    1140 main.go:141] libmachine: [stdout =====>] : 172.19.36.69

                                                
                                                
I0419 17:59:19.208630    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:59:19.208630    1140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
I0419 17:59:19.211437    1140 machine.go:94] provisionDockerMachine start ...
I0419 17:59:19.211655    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
I0419 17:59:21.280334    1140 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0419 17:59:21.280334    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:59:21.280453    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
I0419 17:59:23.835810    1140 main.go:141] libmachine: [stdout =====>] : 172.19.36.69

                                                
                                                
I0419 17:59:23.835810    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:59:23.841551    1140 main.go:141] libmachine: Using SSH client type: native
I0419 17:59:23.842305    1140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.36.69 22 <nil> <nil>}
I0419 17:59:23.842305    1140 main.go:141] libmachine: About to run SSH command:
hostname
I0419 17:59:23.983326    1140 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

                                                
                                                
I0419 17:59:23.983474    1140 buildroot.go:166] provisioning hostname "ha-095800-m02"
I0419 17:59:23.983474    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
I0419 17:59:26.037119    1140 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0419 17:59:26.037199    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:59:26.037268    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
I0419 17:59:28.519691    1140 main.go:141] libmachine: [stdout =====>] : 172.19.36.69

I0419 17:59:28.529925    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:59:28.536828    1140 main.go:141] libmachine: Using SSH client type: native
I0419 17:59:28.536828    1140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.36.69 22 <nil> <nil>}
I0419 17:59:28.537410    1140 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-095800-m02 && echo "ha-095800-m02" | sudo tee /etc/hostname
I0419 17:59:28.701511    1140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-095800-m02

I0419 17:59:28.701646    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
I0419 17:59:30.729541    1140 main.go:141] libmachine: [stdout =====>] : Running

I0419 17:59:30.729541    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:59:30.729541    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
I0419 17:59:33.227986    1140 main.go:141] libmachine: [stdout =====>] : 172.19.36.69

I0419 17:59:33.227986    1140 main.go:141] libmachine: [stderr =====>] : 
I0419 17:59:33.245027    1140 main.go:141] libmachine: Using SSH client type: native
I0419 17:59:33.245657    1140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.36.69 22 <nil> <nil>}
I0419 17:59:33.245657    1140 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-095800-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-095800-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-095800-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I0419 17:59:33.402928    1140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0419 17:59:33.403005    1140 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
I0419 17:59:33.403058    1140 buildroot.go:174] setting up certificates
I0419 17:59:33.403058    1140 provision.go:84] configureAuth start
I0419 17:59:33.403250    1140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-windows-amd64.exe -p ha-095800 node start m02 -v=7 --alsologtostderr": exit status 1
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr: context deadline exceeded (36.7µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr: context deadline exceeded (94.8µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr: context deadline exceeded (92µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:432: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-095800 -n ha-095800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-095800 -n ha-095800: (11.940778s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 logs -n 25: (8.5126821s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| ssh     | ha-095800 ssh -n                                                                                                          | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:52 PDT | 19 Apr 24 17:53 PDT |
	|         | ha-095800-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-095800 cp ha-095800-m03:/home/docker/cp-test.txt                                                                       | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:53 PDT | 19 Apr 24 17:53 PDT |
	|         | ha-095800:/home/docker/cp-test_ha-095800-m03_ha-095800.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-095800 ssh -n                                                                                                          | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:53 PDT | 19 Apr 24 17:53 PDT |
	|         | ha-095800-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-095800 ssh -n ha-095800 sudo cat                                                                                       | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:53 PDT | 19 Apr 24 17:53 PDT |
	|         | /home/docker/cp-test_ha-095800-m03_ha-095800.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-095800 cp ha-095800-m03:/home/docker/cp-test.txt                                                                       | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:53 PDT | 19 Apr 24 17:53 PDT |
	|         | ha-095800-m02:/home/docker/cp-test_ha-095800-m03_ha-095800-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-095800 ssh -n                                                                                                          | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:53 PDT | 19 Apr 24 17:54 PDT |
	|         | ha-095800-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-095800 ssh -n ha-095800-m02 sudo cat                                                                                   | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:54 PDT | 19 Apr 24 17:54 PDT |
	|         | /home/docker/cp-test_ha-095800-m03_ha-095800-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-095800 cp ha-095800-m03:/home/docker/cp-test.txt                                                                       | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:54 PDT | 19 Apr 24 17:54 PDT |
	|         | ha-095800-m04:/home/docker/cp-test_ha-095800-m03_ha-095800-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-095800 ssh -n                                                                                                          | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:54 PDT | 19 Apr 24 17:54 PDT |
	|         | ha-095800-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-095800 ssh -n ha-095800-m04 sudo cat                                                                                   | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:54 PDT | 19 Apr 24 17:54 PDT |
	|         | /home/docker/cp-test_ha-095800-m03_ha-095800-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-095800 cp testdata\cp-test.txt                                                                                         | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:54 PDT | 19 Apr 24 17:54 PDT |
	|         | ha-095800-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-095800 ssh -n                                                                                                          | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:54 PDT | 19 Apr 24 17:55 PDT |
	|         | ha-095800-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-095800 cp ha-095800-m04:/home/docker/cp-test.txt                                                                       | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:55 PDT | 19 Apr 24 17:55 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4282152140\001\cp-test_ha-095800-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-095800 ssh -n                                                                                                          | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:55 PDT | 19 Apr 24 17:55 PDT |
	|         | ha-095800-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-095800 cp ha-095800-m04:/home/docker/cp-test.txt                                                                       | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:55 PDT | 19 Apr 24 17:55 PDT |
	|         | ha-095800:/home/docker/cp-test_ha-095800-m04_ha-095800.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-095800 ssh -n                                                                                                          | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:55 PDT | 19 Apr 24 17:55 PDT |
	|         | ha-095800-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-095800 ssh -n ha-095800 sudo cat                                                                                       | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:55 PDT | 19 Apr 24 17:55 PDT |
	|         | /home/docker/cp-test_ha-095800-m04_ha-095800.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-095800 cp ha-095800-m04:/home/docker/cp-test.txt                                                                       | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:55 PDT | 19 Apr 24 17:56 PDT |
	|         | ha-095800-m02:/home/docker/cp-test_ha-095800-m04_ha-095800-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-095800 ssh -n                                                                                                          | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:56 PDT | 19 Apr 24 17:56 PDT |
	|         | ha-095800-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-095800 ssh -n ha-095800-m02 sudo cat                                                                                   | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:56 PDT | 19 Apr 24 17:56 PDT |
	|         | /home/docker/cp-test_ha-095800-m04_ha-095800-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-095800 cp ha-095800-m04:/home/docker/cp-test.txt                                                                       | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:56 PDT | 19 Apr 24 17:56 PDT |
	|         | ha-095800-m03:/home/docker/cp-test_ha-095800-m04_ha-095800-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-095800 ssh -n                                                                                                          | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:56 PDT | 19 Apr 24 17:56 PDT |
	|         | ha-095800-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-095800 ssh -n ha-095800-m03 sudo cat                                                                                   | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:56 PDT | 19 Apr 24 17:57 PDT |
	|         | /home/docker/cp-test_ha-095800-m04_ha-095800-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-095800 node stop m02 -v=7                                                                                              | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:57 PDT | 19 Apr 24 17:57 PDT |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	| node    | ha-095800 node start m02 -v=7                                                                                             | ha-095800 | minikube1\jenkins | v1.33.0 | 19 Apr 24 17:58 PDT |                     |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 17:29:33
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 17:29:33.737511    6592 out.go:291] Setting OutFile to fd 796 ...
	I0419 17:29:33.738077    6592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 17:29:33.738077    6592 out.go:304] Setting ErrFile to fd 676...
	I0419 17:29:33.738077    6592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 17:29:33.767051    6592 out.go:298] Setting JSON to false
	I0419 17:29:33.770162    6592 start.go:129] hostinfo: {"hostname":"minikube1","uptime":11432,"bootTime":1713561541,"procs":203,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0419 17:29:33.770162    6592 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 17:29:33.776731    6592 out.go:177] * [ha-095800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0419 17:29:33.780567    6592 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 17:29:33.780330    6592 notify.go:220] Checking for updates...
	I0419 17:29:33.782570    6592 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 17:29:33.785497    6592 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0419 17:29:33.794155    6592 out.go:177]   - MINIKUBE_LOCATION=18703
	I0419 17:29:33.800159    6592 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 17:29:33.805983    6592 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 17:29:38.862125    6592 out.go:177] * Using the hyperv driver based on user configuration
	I0419 17:29:38.865579    6592 start.go:297] selected driver: hyperv
	I0419 17:29:38.865679    6592 start.go:901] validating driver "hyperv" against <nil>
	I0419 17:29:38.865679    6592 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 17:29:38.916290    6592 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 17:29:38.916567    6592 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 17:29:38.916567    6592 cni.go:84] Creating CNI manager for ""
	I0419 17:29:38.918279    6592 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0419 17:29:38.918279    6592 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0419 17:29:38.918279    6592 start.go:340] cluster config:
	{Name:ha-095800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 17:29:38.918771    6592 iso.go:125] acquiring lock: {Name:mk297f2abb67cbbcd36490c866afe693892d0c05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 17:29:38.923458    6592 out.go:177] * Starting "ha-095800" primary control-plane node in "ha-095800" cluster
	I0419 17:29:38.925781    6592 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 17:29:38.925781    6592 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0419 17:29:38.925781    6592 cache.go:56] Caching tarball of preloaded images
	I0419 17:29:38.926365    6592 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0419 17:29:38.926575    6592 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 17:29:38.927249    6592 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
	I0419 17:29:38.927616    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json: {Name:mk391c2cfb27f78bbb8efde26cda996bf9a124b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:29:38.928919    6592 start.go:360] acquireMachinesLock for ha-095800: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 17:29:38.928919    6592 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-095800"
	I0419 17:29:38.928919    6592 start.go:93] Provisioning new machine with config: &{Name:ha-095800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 17:29:38.928919    6592 start.go:125] createHost starting for "" (driver="hyperv")
	I0419 17:29:38.933257    6592 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 17:29:38.933369    6592 start.go:159] libmachine.API.Create for "ha-095800" (driver="hyperv")
	I0419 17:29:38.933369    6592 client.go:168] LocalClient.Create starting
	I0419 17:29:38.934202    6592 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0419 17:29:38.934524    6592 main.go:141] libmachine: Decoding PEM data...
	I0419 17:29:38.934634    6592 main.go:141] libmachine: Parsing certificate...
	I0419 17:29:38.934956    6592 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0419 17:29:38.935297    6592 main.go:141] libmachine: Decoding PEM data...
	I0419 17:29:38.935352    6592 main.go:141] libmachine: Parsing certificate...
	I0419 17:29:38.935584    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0419 17:29:40.933666    6592 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0419 17:29:40.933776    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:29:40.933889    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0419 17:29:42.612549    6592 main.go:141] libmachine: [stdout =====>] : False
	
	I0419 17:29:42.612549    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:29:42.612890    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 17:29:44.087341    6592 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 17:29:44.087437    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:29:44.087518    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 17:29:47.563367    6592 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 17:29:47.577029    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:29:47.580016    6592 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0419 17:29:48.099763    6592 main.go:141] libmachine: Creating SSH key...
	I0419 17:29:48.222908    6592 main.go:141] libmachine: Creating VM...
	I0419 17:29:48.222908    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 17:29:50.994194    6592 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 17:29:50.994194    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:29:51.007406    6592 main.go:141] libmachine: Using switch "Default Switch"
	I0419 17:29:51.007551    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 17:29:52.700662    6592 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 17:29:52.700662    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:29:52.712365    6592 main.go:141] libmachine: Creating VHD
	I0419 17:29:52.712501    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\fixed.vhd' -SizeBytes 10MB -Fixed
	I0419 17:29:56.237483    6592 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F08DEC83-980D-4BE5-8EA1-B25D5E43548C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0419 17:29:56.251394    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:29:56.251394    6592 main.go:141] libmachine: Writing magic tar header
	I0419 17:29:56.251394    6592 main.go:141] libmachine: Writing SSH key tar header
	I0419 17:29:56.262614    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\disk.vhd' -VHDType Dynamic -DeleteSource
	I0419 17:29:59.296048    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:29:59.296048    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:29:59.308343    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\disk.vhd' -SizeBytes 20000MB
	I0419 17:30:01.712744    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:01.712744    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:01.712744    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-095800 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0419 17:30:05.249572    6592 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-095800 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0419 17:30:05.261821    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:05.261821    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-095800 -DynamicMemoryEnabled $false
	I0419 17:30:07.374924    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:07.374924    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:07.387730    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-095800 -Count 2
	I0419 17:30:09.435711    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:09.435711    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:09.449906    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-095800 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\boot2docker.iso'
	I0419 17:30:11.876539    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:11.876539    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:11.889177    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-095800 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\disk.vhd'
	I0419 17:30:14.398913    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:14.398913    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:14.398913    6592 main.go:141] libmachine: Starting VM...
	I0419 17:30:14.398913    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-095800
	I0419 17:30:17.322922    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:17.322922    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:17.322922    6592 main.go:141] libmachine: Waiting for host to start...
	I0419 17:30:17.322922    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:30:19.471590    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:30:19.471590    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:19.482265    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:30:21.902245    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:21.914923    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:22.923983    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:30:25.002814    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:30:25.002814    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:25.015384    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:30:27.470762    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:27.481984    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:28.496904    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:30:30.547826    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:30:30.547826    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:30.552836    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:30:32.944468    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:32.944468    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:33.952879    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:30:36.076685    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:30:36.076685    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:36.076685    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:30:38.595373    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:30:38.595373    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:39.605798    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:30:41.668562    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:30:41.668562    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:41.668562    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:30:44.076344    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:30:44.076344    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:44.076344    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:30:46.143599    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:30:46.143599    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:46.143599    6592 machine.go:94] provisionDockerMachine start ...
	I0419 17:30:46.155913    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:30:48.202827    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:30:48.215757    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:48.215757    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:30:50.660391    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:30:50.660391    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:50.679559    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:30:50.689982    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.218 22 <nil> <nil>}
	I0419 17:30:50.689982    6592 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 17:30:50.825859    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0419 17:30:50.825917    6592 buildroot.go:166] provisioning hostname "ha-095800"
	I0419 17:30:50.825968    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:30:52.842161    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:30:52.855687    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:52.855805    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:30:55.303389    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:30:55.303389    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:55.323989    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:30:55.324692    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.218 22 <nil> <nil>}
	I0419 17:30:55.324692    6592 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-095800 && echo "ha-095800" | sudo tee /etc/hostname
	I0419 17:30:55.487112    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-095800
	
	I0419 17:30:55.487216    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:30:57.471053    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:30:57.471053    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:57.483072    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:30:59.838191    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:30:59.838191    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:30:59.855926    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:30:59.855926    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.218 22 <nil> <nil>}
	I0419 17:30:59.856540    6592 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-095800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-095800/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-095800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 17:31:00.005424    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 17:31:00.005534    6592 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0419 17:31:00.005534    6592 buildroot.go:174] setting up certificates
	I0419 17:31:00.005614    6592 provision.go:84] configureAuth start
	I0419 17:31:00.005712    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:02.022080    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:02.022080    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:02.033255    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:04.466604    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:04.466604    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:04.479778    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:06.458998    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:06.470979    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:06.470979    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:08.936620    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:08.936620    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:08.949890    6592 provision.go:143] copyHostCerts
	I0419 17:31:08.950044    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0419 17:31:08.950335    6592 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0419 17:31:08.950527    6592 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0419 17:31:08.951082    6592 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0419 17:31:08.952145    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0419 17:31:08.952465    6592 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0419 17:31:08.952545    6592 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0419 17:31:08.952873    6592 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0419 17:31:08.953950    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0419 17:31:08.954506    6592 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0419 17:31:08.954506    6592 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0419 17:31:08.954714    6592 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0419 17:31:08.955735    6592 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-095800 san=[127.0.0.1 172.19.32.218 ha-095800 localhost minikube]
	I0419 17:31:09.094442    6592 provision.go:177] copyRemoteCerts
	I0419 17:31:09.114871    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 17:31:09.114871    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:11.108703    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:11.108703    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:11.120872    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:13.596760    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:13.596760    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:13.609796    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:31:13.721337    6592 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6063999s)
	I0419 17:31:13.721447    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0419 17:31:13.721558    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0419 17:31:13.769222    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0419 17:31:13.770037    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0419 17:31:13.819018    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0419 17:31:13.819610    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 17:31:13.866596    6592 provision.go:87] duration metric: took 13.8608268s to configureAuth
	I0419 17:31:13.866596    6592 buildroot.go:189] setting minikube options for container-runtime
	I0419 17:31:13.867568    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:31:13.867704    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:15.834901    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:15.834901    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:15.834901    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:18.285790    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:18.285869    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:18.291392    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:31:18.292178    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.218 22 <nil> <nil>}
	I0419 17:31:18.292178    6592 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0419 17:31:18.427821    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0419 17:31:18.427937    6592 buildroot.go:70] root file system type: tmpfs
	I0419 17:31:18.428091    6592 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0419 17:31:18.428307    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:20.422020    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:20.422020    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:20.434706    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:22.869543    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:22.869543    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:22.889098    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:31:22.889252    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.218 22 <nil> <nil>}
	I0419 17:31:22.889252    6592 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0419 17:31:23.056795    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0419 17:31:23.056886    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:25.048762    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:25.048762    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:25.048762    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:27.455906    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:27.455906    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:27.479079    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:31:27.479635    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.218 22 <nil> <nil>}
	I0419 17:31:27.479635    6592 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0419 17:31:29.587786    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0419 17:31:29.587863    6592 machine.go:97] duration metric: took 43.4441557s to provisionDockerMachine
	I0419 17:31:29.587894    6592 client.go:171] duration metric: took 1m50.6542478s to LocalClient.Create
	I0419 17:31:29.587990    6592 start.go:167] duration metric: took 1m50.6543445s to libmachine.API.Create "ha-095800"
	I0419 17:31:29.588033    6592 start.go:293] postStartSetup for "ha-095800" (driver="hyperv")
	I0419 17:31:29.588072    6592 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 17:31:29.602279    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 17:31:29.602279    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:31.584924    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:31.584924    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:31.584924    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:34.006039    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:34.006039    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:34.018121    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:31:34.129289    6592 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5269634s)
	I0419 17:31:34.143743    6592 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 17:31:34.152466    6592 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 17:31:34.152466    6592 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0419 17:31:34.152466    6592 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0419 17:31:34.153942    6592 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> 34162.pem in /etc/ssl/certs
	I0419 17:31:34.153942    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /etc/ssl/certs/34162.pem
	I0419 17:31:34.166174    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 17:31:34.186683    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /etc/ssl/certs/34162.pem (1708 bytes)
	I0419 17:31:34.234425    6592 start.go:296] duration metric: took 4.6463804s for postStartSetup
	I0419 17:31:34.238214    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:36.226513    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:36.226513    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:36.226513    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:38.627616    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:38.627616    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:38.639535    6592 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
	I0419 17:31:38.642656    6592 start.go:128] duration metric: took 1m59.713438s to createHost
	I0419 17:31:38.642742    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:40.623935    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:40.623935    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:40.636053    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:43.026686    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:43.026686    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:43.040936    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:31:43.040936    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.218 22 <nil> <nil>}
	I0419 17:31:43.045596    6592 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 17:31:43.186947    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713573103.186063964
	
	I0419 17:31:43.186947    6592 fix.go:216] guest clock: 1713573103.186063964
	I0419 17:31:43.186947    6592 fix.go:229] Guest: 2024-04-19 17:31:43.186063964 -0700 PDT Remote: 2024-04-19 17:31:38.6426563 -0700 PDT m=+125.010437401 (delta=4.543407664s)
	I0419 17:31:43.187472    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:45.151268    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:45.163524    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:45.163742    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:47.513964    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:47.527446    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:47.533827    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:31:47.534866    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.218 22 <nil> <nil>}
	I0419 17:31:47.534866    6592 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713573103
	I0419 17:31:47.692212    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: Sat Apr 20 00:31:43 UTC 2024
	
	I0419 17:31:47.692212    6592 fix.go:236] clock set: Sat Apr 20 00:31:43 UTC 2024
	 (err=<nil>)
	I0419 17:31:47.692212    6592 start.go:83] releasing machines lock for "ha-095800", held for 2m8.7629719s
	I0419 17:31:47.692803    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:49.668473    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:49.668473    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:49.668473    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:52.060494    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:52.074298    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:52.079190    6592 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 17:31:52.079190    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:52.088664    6592 ssh_runner.go:195] Run: cat /version.json
	I0419 17:31:52.088664    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:31:54.149791    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:54.162193    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:54.162193    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:54.162193    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:31:54.162193    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:54.162377    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:31:56.681532    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:56.693565    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:56.693565    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:31:56.720933    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:31:56.722893    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:31:56.723199    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:31:56.790418    6592 ssh_runner.go:235] Completed: cat /version.json: (4.7017426s)
	I0419 17:31:56.805060    6592 ssh_runner.go:195] Run: systemctl --version
	I0419 17:31:57.049513    6592 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9703101s)
	I0419 17:31:57.064050    6592 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 17:31:57.073407    6592 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 17:31:57.084623    6592 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 17:31:57.113231    6592 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 17:31:57.113231    6592 start.go:494] detecting cgroup driver to use...
	I0419 17:31:57.113231    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 17:31:57.165411    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0419 17:31:57.204163    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0419 17:31:57.229829    6592 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0419 17:31:57.243691    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0419 17:31:57.277897    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 17:31:57.313480    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0419 17:31:57.347097    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 17:31:57.379463    6592 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 17:31:57.415957    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0419 17:31:57.451592    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0419 17:31:57.488905    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0419 17:31:57.521807    6592 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 17:31:57.558339    6592 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 17:31:57.591650    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:31:57.784417    6592 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0419 17:31:57.817224    6592 start.go:494] detecting cgroup driver to use...
	I0419 17:31:57.830914    6592 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0419 17:31:57.869207    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 17:31:57.900389    6592 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 17:31:57.952329    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 17:31:57.992892    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 17:31:58.034281    6592 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0419 17:31:58.103157    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 17:31:58.128173    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 17:31:58.177907    6592 ssh_runner.go:195] Run: which cri-dockerd
	I0419 17:31:58.211830    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0419 17:31:58.230585    6592 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0419 17:31:58.281983    6592 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0419 17:31:58.482222    6592 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0419 17:31:58.670765    6592 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0419 17:31:58.670765    6592 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0419 17:31:58.716229    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:31:58.909018    6592 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 17:32:01.409530    6592 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5005051s)
	I0419 17:32:01.422052    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0419 17:32:01.457845    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 17:32:01.497065    6592 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0419 17:32:01.705185    6592 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0419 17:32:01.904644    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:32:02.102021    6592 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0419 17:32:02.146347    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 17:32:02.183141    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:32:02.377075    6592 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0419 17:32:02.484776    6592 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0419 17:32:02.503160    6592 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0419 17:32:02.511881    6592 start.go:562] Will wait 60s for crictl version
	I0419 17:32:02.527914    6592 ssh_runner.go:195] Run: which crictl
	I0419 17:32:02.546721    6592 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 17:32:02.601272    6592 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0419 17:32:02.612044    6592 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 17:32:02.652626    6592 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 17:32:02.688296    6592 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0419 17:32:02.688407    6592 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0419 17:32:02.693273    6592 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0419 17:32:02.693273    6592 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0419 17:32:02.693461    6592 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0419 17:32:02.693496    6592 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8c:b9:25 Flags:up|broadcast|multicast|running}
	I0419 17:32:02.696305    6592 ip.go:210] interface addr: fe80::ce04:318e:a1d8:4460/64
	I0419 17:32:02.696305    6592 ip.go:210] interface addr: 172.19.32.1/20
	I0419 17:32:02.712048    6592 ssh_runner.go:195] Run: grep 172.19.32.1	host.minikube.internal$ /etc/hosts
	I0419 17:32:02.718407    6592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.32.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 17:32:02.752465    6592 kubeadm.go:877] updating cluster {Name:ha-095800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP
:172.19.47.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.32.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 17:32:02.752465    6592 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 17:32:02.763495    6592 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0419 17:32:02.781982    6592 docker.go:685] Got preloaded images: 
	I0419 17:32:02.781982    6592 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0419 17:32:02.795234    6592 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0419 17:32:02.826514    6592 ssh_runner.go:195] Run: which lz4
	I0419 17:32:02.829232    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0419 17:32:02.846708    6592 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0419 17:32:02.854109    6592 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0419 17:32:02.854302    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0419 17:32:05.022924    6592 docker.go:649] duration metric: took 2.1936873s to copy over tarball
	I0419 17:32:05.040914    6592 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0419 17:32:13.700030    6592 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6590938s)
	I0419 17:32:13.700030    6592 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0419 17:32:13.771028    6592 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0419 17:32:13.806461    6592 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0419 17:32:13.855109    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:32:14.070585    6592 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 17:32:17.549912    6592 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.4792825s)
	I0419 17:32:17.560824    6592 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0419 17:32:17.589467    6592 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0419 17:32:17.589467    6592 cache_images.go:84] Images are preloaded, skipping loading
	I0419 17:32:17.589467    6592 kubeadm.go:928] updating node { 172.19.32.218 8443 v1.30.0 docker true true} ...
	I0419 17:32:17.589996    6592 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-095800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.32.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP:172.19.47.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 17:32:17.601156    6592 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0419 17:32:17.638404    6592 cni.go:84] Creating CNI manager for ""
	I0419 17:32:17.638470    6592 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0419 17:32:17.638510    6592 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 17:32:17.638553    6592 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.32.218 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-095800 NodeName:ha-095800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.32.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.32.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 17:32:17.638849    6592 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.32.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-095800"
	  kubeletExtraArgs:
	    node-ip: 172.19.32.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.32.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0419 17:32:17.638849    6592 kube-vip.go:111] generating kube-vip config ...
	I0419 17:32:17.649737    6592 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0419 17:32:17.678556    6592 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0419 17:32:17.678863    6592 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.47.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0419 17:32:17.692988    6592 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 17:32:17.710078    6592 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 17:32:17.724534    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0419 17:32:17.743387    6592 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0419 17:32:17.772504    6592 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 17:32:17.800953    6592 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0419 17:32:17.830227    6592 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1351 bytes)
	I0419 17:32:17.877176    6592 ssh_runner.go:195] Run: grep 172.19.47.254	control-plane.minikube.internal$ /etc/hosts
	I0419 17:32:17.883915    6592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.47.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 17:32:17.918845    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:32:18.130718    6592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 17:32:18.160872    6592 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800 for IP: 172.19.32.218
	I0419 17:32:18.160929    6592 certs.go:194] generating shared ca certs ...
	I0419 17:32:18.160929    6592 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:32:18.161559    6592 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0419 17:32:18.161902    6592 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0419 17:32:18.161902    6592 certs.go:256] generating profile certs ...
	I0419 17:32:18.162602    6592 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\client.key
	I0419 17:32:18.162602    6592 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\client.crt with IP's: []
	I0419 17:32:18.320917    6592 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\client.crt ...
	I0419 17:32:18.320917    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\client.crt: {Name:mk711b752ff52da904c50e38439fdc0151dc3ec3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:32:18.321965    6592 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\client.key ...
	I0419 17:32:18.321965    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\client.key: {Name:mk037074c22d8f8025321a73c62f0358f708eddd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:32:18.323722    6592 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.8667e9b8
	I0419 17:32:18.324788    6592 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.8667e9b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.32.218 172.19.47.254]
	I0419 17:32:18.424402    6592 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.8667e9b8 ...
	I0419 17:32:18.424402    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.8667e9b8: {Name:mkdbd8a4ad7a7a81f6e8f1b50d58f2d3833f9d81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:32:18.427894    6592 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.8667e9b8 ...
	I0419 17:32:18.427894    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.8667e9b8: {Name:mkb6551bebeb36a10c30482ef6ea1a13a9456a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:32:18.429217    6592 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.8667e9b8 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt
	I0419 17:32:18.434955    6592 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.8667e9b8 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key
	I0419 17:32:18.441440    6592 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key
	I0419 17:32:18.441440    6592 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.crt with IP's: []
	I0419 17:32:18.551033    6592 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.crt ...
	I0419 17:32:18.551033    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.crt: {Name:mk74da4d53e00801e4765e0c25e4bcf60f62806e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:32:18.555281    6592 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key ...
	I0419 17:32:18.555281    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key: {Name:mkdc1851d74dcae8a8a9dd44613b192a8632ad57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:32:18.556589    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 17:32:18.557021    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0419 17:32:18.557272    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 17:32:18.557407    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 17:32:18.557407    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 17:32:18.557407    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 17:32:18.557407    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 17:32:18.560275    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 17:32:18.566549    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem (1338 bytes)
	W0419 17:32:18.567271    6592 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416_empty.pem, impossibly tiny 0 bytes
	I0419 17:32:18.567494    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0419 17:32:18.567494    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0419 17:32:18.567494    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0419 17:32:18.568251    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0419 17:32:18.568397    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem (1708 bytes)
	I0419 17:32:18.568397    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem -> /usr/share/ca-certificates/3416.pem
	I0419 17:32:18.569025    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /usr/share/ca-certificates/34162.pem
	I0419 17:32:18.569158    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:32:18.569381    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 17:32:18.623928    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 17:32:18.673142    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 17:32:18.718851    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 17:32:18.768839    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0419 17:32:18.819335    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0419 17:32:18.868897    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 17:32:18.914112    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0419 17:32:18.960195    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem --> /usr/share/ca-certificates/3416.pem (1338 bytes)
	I0419 17:32:19.005762    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /usr/share/ca-certificates/34162.pem (1708 bytes)
	I0419 17:32:19.053902    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 17:32:19.094611    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0419 17:32:19.139248    6592 ssh_runner.go:195] Run: openssl version
	I0419 17:32:19.164542    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 17:32:19.197565    6592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:32:19.204064    6592 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:32:19.217377    6592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:32:19.240619    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 17:32:19.272150    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3416.pem && ln -fs /usr/share/ca-certificates/3416.pem /etc/ssl/certs/3416.pem"
	I0419 17:32:19.304312    6592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3416.pem
	I0419 17:32:19.311837    6592 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 17:32:19.326934    6592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3416.pem
	I0419 17:32:19.349883    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3416.pem /etc/ssl/certs/51391683.0"
	I0419 17:32:19.385234    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34162.pem && ln -fs /usr/share/ca-certificates/34162.pem /etc/ssl/certs/34162.pem"
	I0419 17:32:19.427826    6592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34162.pem
	I0419 17:32:19.437780    6592 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 17:32:19.451063    6592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34162.pem
	I0419 17:32:19.475648    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34162.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 17:32:19.521818    6592 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 17:32:19.528492    6592 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 17:32:19.528741    6592 kubeadm.go:391] StartCluster: {Name:ha-095800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP:172.19.47.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.32.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 17:32:19.538401    6592 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0419 17:32:19.570925    6592 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0419 17:32:19.606816    6592 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 17:32:19.638164    6592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 17:32:19.655153    6592 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 17:32:19.655153    6592 kubeadm.go:156] found existing configuration files:
	
	I0419 17:32:19.666376    6592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0419 17:32:19.684915    6592 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 17:32:19.698874    6592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 17:32:19.732844    6592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0419 17:32:19.752165    6592 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 17:32:19.764074    6592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 17:32:19.796662    6592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0419 17:32:19.805026    6592 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 17:32:19.825011    6592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 17:32:19.860236    6592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0419 17:32:19.876952    6592 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 17:32:19.890910    6592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0419 17:32:19.907941    6592 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0419 17:32:20.388361    6592 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0419 17:32:34.760431    6592 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0419 17:32:34.760599    6592 kubeadm.go:309] [preflight] Running pre-flight checks
	I0419 17:32:34.760599    6592 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0419 17:32:34.760599    6592 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0419 17:32:34.761164    6592 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0419 17:32:34.761402    6592 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0419 17:32:34.764399    6592 out.go:204]   - Generating certificates and keys ...
	I0419 17:32:34.764654    6592 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0419 17:32:34.764856    6592 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0419 17:32:34.764856    6592 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0419 17:32:34.764856    6592 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0419 17:32:34.764856    6592 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0419 17:32:34.764856    6592 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0419 17:32:34.765495    6592 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0419 17:32:34.765673    6592 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-095800 localhost] and IPs [172.19.32.218 127.0.0.1 ::1]
	I0419 17:32:34.765673    6592 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0419 17:32:34.765673    6592 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-095800 localhost] and IPs [172.19.32.218 127.0.0.1 ::1]
	I0419 17:32:34.766427    6592 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0419 17:32:34.766615    6592 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0419 17:32:34.766768    6592 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0419 17:32:34.766910    6592 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0419 17:32:34.766910    6592 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0419 17:32:34.766910    6592 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0419 17:32:34.766910    6592 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0419 17:32:34.767431    6592 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0419 17:32:34.767522    6592 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0419 17:32:34.767619    6592 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0419 17:32:34.767619    6592 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0419 17:32:34.769918    6592 out.go:204]   - Booting up control plane ...
	I0419 17:32:34.769918    6592 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0419 17:32:34.769918    6592 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0419 17:32:34.770476    6592 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0419 17:32:34.770652    6592 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 17:32:34.770652    6592 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 17:32:34.770652    6592 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0419 17:32:34.770652    6592 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0419 17:32:34.770652    6592 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0419 17:32:34.771184    6592 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.004096977s
	I0419 17:32:34.771326    6592 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0419 17:32:34.771431    6592 kubeadm.go:309] [api-check] The API server is healthy after 7.502808092s
	I0419 17:32:34.771431    6592 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0419 17:32:34.771431    6592 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0419 17:32:34.771431    6592 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0419 17:32:34.772260    6592 kubeadm.go:309] [mark-control-plane] Marking the node ha-095800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0419 17:32:34.772569    6592 kubeadm.go:309] [bootstrap-token] Using token: 1vlilj.5gxlnz6bb5qp1ob8
	I0419 17:32:34.774298    6592 out.go:204]   - Configuring RBAC rules ...
	I0419 17:32:34.775110    6592 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0419 17:32:34.775306    6592 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0419 17:32:34.775306    6592 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0419 17:32:34.775850    6592 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0419 17:32:34.776046    6592 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0419 17:32:34.776046    6592 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0419 17:32:34.776046    6592 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0419 17:32:34.776602    6592 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0419 17:32:34.776602    6592 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0419 17:32:34.776602    6592 kubeadm.go:309] 
	I0419 17:32:34.776899    6592 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0419 17:32:34.776949    6592 kubeadm.go:309] 
	I0419 17:32:34.777207    6592 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0419 17:32:34.777261    6592 kubeadm.go:309] 
	I0419 17:32:34.777421    6592 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0419 17:32:34.777602    6592 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0419 17:32:34.777804    6592 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0419 17:32:34.777841    6592 kubeadm.go:309] 
	I0419 17:32:34.778021    6592 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0419 17:32:34.778054    6592 kubeadm.go:309] 
	I0419 17:32:34.778245    6592 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0419 17:32:34.778280    6592 kubeadm.go:309] 
	I0419 17:32:34.778448    6592 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0419 17:32:34.778734    6592 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0419 17:32:34.778979    6592 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0419 17:32:34.779011    6592 kubeadm.go:309] 
	I0419 17:32:34.779261    6592 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0419 17:32:34.779384    6592 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0419 17:32:34.779384    6592 kubeadm.go:309] 
	I0419 17:32:34.779507    6592 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 1vlilj.5gxlnz6bb5qp1ob8 \
	I0419 17:32:34.779632    6592 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 \
	I0419 17:32:34.779662    6592 kubeadm.go:309] 	--control-plane 
	I0419 17:32:34.779694    6592 kubeadm.go:309] 
	I0419 17:32:34.779785    6592 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0419 17:32:34.779817    6592 kubeadm.go:309] 
	I0419 17:32:34.779847    6592 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 1vlilj.5gxlnz6bb5qp1ob8 \
	I0419 17:32:34.779847    6592 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 
	I0419 17:32:34.779847    6592 cni.go:84] Creating CNI manager for ""
	I0419 17:32:34.779847    6592 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0419 17:32:34.782298    6592 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0419 17:32:34.801152    6592 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0419 17:32:34.809572    6592 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0419 17:32:34.809652    6592 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0419 17:32:34.857691    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0419 17:32:35.589912    6592 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0419 17:32:35.604538    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:35.607752    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-095800 minikube.k8s.io/updated_at=2024_04_19T17_32_35_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=ha-095800 minikube.k8s.io/primary=true
	I0419 17:32:35.624924    6592 ops.go:34] apiserver oom_adj: -16
	I0419 17:32:35.857311    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:36.375335    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:36.864680    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:37.363004    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:37.851042    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:38.358165    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:38.867346    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:39.355457    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:39.864501    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:40.360093    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:40.855528    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:41.353833    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:41.862142    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:42.366966    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:42.856795    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:43.350865    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:43.857246    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:44.364032    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:44.861124    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:45.365278    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:45.858185    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:46.360562    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:46.853694    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 17:32:47.005722    6592 kubeadm.go:1107] duration metric: took 11.4157203s to wait for elevateKubeSystemPrivileges
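The run of identical `kubectl get sa default` calls above is minikube polling roughly every 500ms until the `default` service account exists (the "elevateKubeSystemPrivileges" wait). A minimal sketch of that retry pattern, with `retry_until` and the marker file as stand-ins invented here rather than minikube's actual code:

```shell
#!/bin/sh
# Retry a command until it succeeds or a tick budget runs out.
# Each tick is ~500ms, mirroring the polling interval seen in the log.
retry_until() {
    ticks=$1; shift
    n=0
    while [ "$n" -lt "$ticks" ]; do
        if "$@"; then
            return 0            # command succeeded, stop polling
        fi
        sleep 0.5               # ~500ms between attempts, as in the log
        n=$((n + 1))
    done
    return 1                    # budget exhausted
}

# Usage: wait for a marker file that appears after ~1s,
# simulating the service account eventually showing up.
rm -f /tmp/sa-ready
( sleep 1; touch /tmp/sa-ready ) &
retry_until 10 test -f /tmp/sa-ready && echo "default service account ready"
```

The real loop swaps `test -f ...` for `sudo .../kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig` and bounds the wait with a deadline instead of a tick count.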
	W0419 17:32:47.005844    6592 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0419 17:32:47.005844    6592 kubeadm.go:393] duration metric: took 27.4770345s to StartCluster
	I0419 17:32:47.005900    6592 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:32:47.006087    6592 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 17:32:47.008060    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:32:47.010105    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0419 17:32:47.010105    6592 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.32.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 17:32:47.010105    6592 start.go:240] waiting for startup goroutines ...
	I0419 17:32:47.010105    6592 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0419 17:32:47.010105    6592 addons.go:69] Setting default-storageclass=true in profile "ha-095800"
	I0419 17:32:47.010105    6592 addons.go:69] Setting storage-provisioner=true in profile "ha-095800"
	I0419 17:32:47.010105    6592 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-095800"
	I0419 17:32:47.010105    6592 addons.go:234] Setting addon storage-provisioner=true in "ha-095800"
	I0419 17:32:47.010105    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:32:47.010105    6592 host.go:66] Checking if "ha-095800" exists ...
	I0419 17:32:47.011167    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:32:47.011167    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:32:47.201363    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.32.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0419 17:32:47.647098    6592 start.go:946] {"host.minikube.internal": 172.19.32.1} host record injected into CoreDNS's ConfigMap
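The CoreDNS edit above pipes the ConfigMap through `sed` to insert a `hosts` block (mapping `host.minikube.internal` to the host gateway) before the `forward` plugin, plus a `log` directive before `errors`. The same two sed expressions applied to a local sample Corefile (the sample content is a plausible default, not pulled from the cluster):

```shell
# Sample Corefile with the 8-space indentation the sed patterns expect.
cat > /tmp/Corefile <<'EOF'
.:53 {
        errors
        health
        forward . /etc/resolv.conf
        cache 30
}
EOF

# Same expressions as the log: inject the hosts block before `forward`,
# and `log` before `errors`. (GNU sed: `\n` in `i\` text becomes a newline.)
sed -i \
    -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.32.1 host.minikube.internal\n           fallthrough\n        }' \
    -e '/^        errors *$/i \        log' \
    /tmp/Corefile
```

minikube then feeds the edited YAML back with `kubectl replace -f -`, which is why the log reports the host record as "injected into CoreDNS's ConfigMap".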
	I0419 17:32:49.139169    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:32:49.146189    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:32:49.146246    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:32:49.146295    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:32:49.148387    6592 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 17:32:49.147368    6592 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 17:32:49.151218    6592 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 17:32:49.151218    6592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0419 17:32:49.151218    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:32:49.151218    6592 kapi.go:59] client config for ha-095800: &rest.Config{Host:"https://172.19.47.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-095800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-095800\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c35620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 17:32:49.153289    6592 cert_rotation.go:137] Starting client certificate rotation controller
	I0419 17:32:49.153604    6592 addons.go:234] Setting addon default-storageclass=true in "ha-095800"
	I0419 17:32:49.153604    6592 host.go:66] Checking if "ha-095800" exists ...
	I0419 17:32:49.155387    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:32:51.332326    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:32:51.345224    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:32:51.345391    6592 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0419 17:32:51.345456    6592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0419 17:32:51.345456    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:32:51.487983    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:32:51.487983    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:32:51.487983    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:32:53.536400    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:32:53.536400    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:32:53.536400    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:32:54.049096    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:32:54.050078    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:32:54.050333    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:32:54.191171    6592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 17:32:56.135414    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:32:56.135414    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:32:56.135414    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:32:56.289538    6592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0419 17:32:56.466935    6592 round_trippers.go:463] GET https://172.19.47.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0419 17:32:56.466996    6592 round_trippers.go:469] Request Headers:
	I0419 17:32:56.467048    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:32:56.467048    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:32:56.483370    6592 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0419 17:32:56.484634    6592 round_trippers.go:463] PUT https://172.19.47.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0419 17:32:56.484677    6592 round_trippers.go:469] Request Headers:
	I0419 17:32:56.484760    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:32:56.484863    6592 round_trippers.go:473]     Content-Type: application/json
	I0419 17:32:56.484863    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:32:56.487612    6592 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:32:56.492063    6592 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0419 17:32:56.495558    6592 addons.go:505] duration metric: took 9.4854287s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0419 17:32:56.495633    6592 start.go:245] waiting for cluster config update ...
	I0419 17:32:56.495705    6592 start.go:254] writing updated cluster config ...
	I0419 17:32:56.499234    6592 out.go:177] 
	I0419 17:32:56.509956    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:32:56.510141    6592 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
	I0419 17:32:56.525772    6592 out.go:177] * Starting "ha-095800-m02" control-plane node in "ha-095800" cluster
	I0419 17:32:56.534136    6592 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 17:32:56.536794    6592 cache.go:56] Caching tarball of preloaded images
	I0419 17:32:56.537438    6592 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0419 17:32:56.537466    6592 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 17:32:56.537466    6592 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
	I0419 17:32:56.540940    6592 start.go:360] acquireMachinesLock for ha-095800-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 17:32:56.541147    6592 start.go:364] duration metric: took 172.6µs to acquireMachinesLock for "ha-095800-m02"
	I0419 17:32:56.541391    6592 start.go:93] Provisioning new machine with config: &{Name:ha-095800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:def
ault APIServerHAVIP:172.19.47.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.32.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C
:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 17:32:56.541453    6592 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0419 17:32:56.543448    6592 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 17:32:56.543996    6592 start.go:159] libmachine.API.Create for "ha-095800" (driver="hyperv")
	I0419 17:32:56.544099    6592 client.go:168] LocalClient.Create starting
	I0419 17:32:56.544628    6592 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0419 17:32:56.544628    6592 main.go:141] libmachine: Decoding PEM data...
	I0419 17:32:56.544628    6592 main.go:141] libmachine: Parsing certificate...
	I0419 17:32:56.545234    6592 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0419 17:32:56.545234    6592 main.go:141] libmachine: Decoding PEM data...
	I0419 17:32:56.545234    6592 main.go:141] libmachine: Parsing certificate...
	I0419 17:32:56.545234    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0419 17:32:58.413308    6592 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0419 17:32:58.413308    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:32:58.413308    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0419 17:33:00.149585    6592 main.go:141] libmachine: [stdout =====>] : False
	
	I0419 17:33:00.149585    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:00.149585    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 17:33:01.608158    6592 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 17:33:01.608158    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:01.617518    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 17:33:05.085817    6592 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 17:33:05.085817    6592 main.go:141] libmachine: [stderr =====>] : 
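The switch-selection step above filters `Hyper-V\Get-VMSwitch` output (External switches, or the well-known Default Switch GUID) through `ConvertTo-Json`. A rough sketch of pulling the switch name out of that JSON with standard tools, using the JSON exactly as it appears in the log (a proper JSON parser would be more robust than this `sed`):

```shell
# JSON copied from the log output above.
cat > /tmp/switches.json <<'EOF'
[
    {
        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
        "Name":  "Default Switch",
        "SwitchType":  1
    }
]
EOF

# Extract the first "Name" value; `head -n1` keeps only the first match,
# matching the driver's "use the first eligible switch" behavior.
SWITCH=$(sed -n 's/.*"Name": *"\([^"]*\)".*/\1/p' /tmp/switches.json | head -n1)
echo "Using switch \"$SWITCH\""
```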
	I0419 17:33:05.088839    6592 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0419 17:33:05.559590    6592 main.go:141] libmachine: Creating SSH key...
	I0419 17:33:05.664153    6592 main.go:141] libmachine: Creating VM...
	I0419 17:33:05.664153    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 17:33:08.443948    6592 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 17:33:08.457064    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:08.457064    6592 main.go:141] libmachine: Using switch "Default Switch"
	I0419 17:33:08.457232    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 17:33:10.179571    6592 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 17:33:10.179571    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:10.179571    6592 main.go:141] libmachine: Creating VHD
	I0419 17:33:10.179571    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0419 17:33:13.789883    6592 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : ED82E740-B20D-44DE-BD86-3F701B42C30A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0419 17:33:13.789883    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:13.789883    6592 main.go:141] libmachine: Writing magic tar header
	I0419 17:33:13.789883    6592 main.go:141] libmachine: Writing SSH key tar header
	I0419 17:33:13.790788    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0419 17:33:16.894244    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:16.894244    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:16.895366    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\disk.vhd' -SizeBytes 20000MB
	I0419 17:33:19.361031    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:19.361031    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:19.373223    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-095800-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0419 17:33:22.926626    6592 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-095800-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0419 17:33:22.939521    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:22.939521    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-095800-m02 -DynamicMemoryEnabled $false
	I0419 17:33:25.070246    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:25.082953    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:25.083076    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-095800-m02 -Count 2
	I0419 17:33:27.181634    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:27.181688    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:27.181688    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-095800-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\boot2docker.iso'
	I0419 17:33:29.709958    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:29.709958    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:29.710052    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-095800-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\disk.vhd'
	I0419 17:33:32.283608    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:32.297309    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:32.297309    6592 main.go:141] libmachine: Starting VM...
	I0419 17:33:32.297479    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-095800-m02
	I0419 17:33:35.382480    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:35.383660    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:35.383733    6592 main.go:141] libmachine: Waiting for host to start...
	I0419 17:33:35.383733    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:33:37.586849    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:33:37.586849    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:37.592459    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:33:40.064917    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:40.064917    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:41.070094    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:33:43.231331    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:33:43.231331    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:43.231556    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:33:45.716749    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:45.716749    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:46.718541    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:33:48.836277    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:33:48.837376    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:48.837376    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:33:51.296561    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:51.296561    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:52.312070    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:33:54.403261    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:33:54.412274    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:54.412274    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:33:56.897221    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:33:56.897221    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:33:57.913095    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:00.077382    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:00.077382    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:00.084599    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:02.611899    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:02.623900    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:02.624053    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:04.668991    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:04.668991    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:04.680368    6592 machine.go:94] provisionDockerMachine start ...
	I0419 17:34:04.680459    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:06.757918    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:06.757918    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:06.770308    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:09.238359    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:09.238359    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:09.256838    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:34:09.257560    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.39.106 22 <nil> <nil>}
	I0419 17:34:09.257560    6592 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 17:34:09.401524    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0419 17:34:09.401524    6592 buildroot.go:166] provisioning hostname "ha-095800-m02"
	I0419 17:34:09.401524    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:11.457086    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:11.457086    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:11.468691    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:13.954004    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:13.967037    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:13.973112    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:34:13.973891    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.39.106 22 <nil> <nil>}
	I0419 17:34:13.973891    6592 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-095800-m02 && echo "ha-095800-m02" | sudo tee /etc/hostname
	I0419 17:34:14.137164    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-095800-m02
	
	I0419 17:34:14.137293    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:16.184710    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:16.184710    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:16.184710    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:18.663350    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:18.663350    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:18.681601    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:34:18.682182    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.39.106 22 <nil> <nil>}
	I0419 17:34:18.682182    6592 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-095800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-095800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-095800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 17:34:18.838880    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: 
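The `/etc/hosts` update the log just ran over SSH can be replayed locally; a hedged sketch, using a temp file in place of the real `/etc/hosts` and the node name from the log:

```shell
#!/bin/sh
# Sketch of the hosts-file update above, against a temp file (not /etc/hosts).
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$HOSTS"
NAME=ha-095800-m02
if ! grep -q "\s$NAME\$" "$HOSTS"; then
  if grep -q '^127.0.1.1\s' "$HOSTS"; then
    # a 127.0.1.1 entry exists: rewrite it in place, as the logged sed does
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
grep '^127.0.1.1' "$HOSTS"   # -> 127.0.1.1 ha-095800-m02
```

The empty SSH output on the next log line is expected: the `sed` branch ran, and `sed -i` prints nothing.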
	I0419 17:34:18.838880    6592 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0419 17:34:18.838880    6592 buildroot.go:174] setting up certificates
	I0419 17:34:18.838880    6592 provision.go:84] configureAuth start
	I0419 17:34:18.838880    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:20.912372    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:20.912372    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:20.927907    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:23.411852    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:23.411852    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:23.423900    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:25.508105    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:25.508105    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:25.510392    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:28.012640    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:28.025708    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:28.025813    6592 provision.go:143] copyHostCerts
	I0419 17:34:28.025813    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0419 17:34:28.026369    6592 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0419 17:34:28.026369    6592 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0419 17:34:28.026842    6592 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0419 17:34:28.028066    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0419 17:34:28.028066    6592 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0419 17:34:28.028066    6592 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0419 17:34:28.028610    6592 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0419 17:34:28.029763    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0419 17:34:28.030023    6592 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0419 17:34:28.030023    6592 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0419 17:34:28.030499    6592 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0419 17:34:28.031502    6592 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-095800-m02 san=[127.0.0.1 172.19.39.106 ha-095800-m02 localhost minikube]
	I0419 17:34:28.208607    6592 provision.go:177] copyRemoteCerts
	I0419 17:34:28.216418    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 17:34:28.216418    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:30.273993    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:30.286804    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:30.286804    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:32.809138    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:32.809273    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:32.809332    6592 sshutil.go:53] new ssh client: &{IP:172.19.39.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\id_rsa Username:docker}
	I0419 17:34:32.926461    6592 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7100316s)
	I0419 17:34:32.926461    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0419 17:34:32.927040    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0419 17:34:32.973031    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0419 17:34:32.973573    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0419 17:34:33.023523    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0419 17:34:33.024749    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0419 17:34:33.074937    6592 provision.go:87] duration metric: took 14.2360223s to configureAuth
	I0419 17:34:33.075035    6592 buildroot.go:189] setting minikube options for container-runtime
	I0419 17:34:33.075110    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:34:33.075110    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:35.106629    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:35.106629    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:35.106711    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:37.526573    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:37.535161    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:37.545127    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:34:37.545298    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.39.106 22 <nil> <nil>}
	I0419 17:34:37.545298    6592 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0419 17:34:37.682149    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0419 17:34:37.682149    6592 buildroot.go:70] root file system type: tmpfs
	I0419 17:34:37.682716    6592 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0419 17:34:37.682889    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:39.714333    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:39.714333    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:39.714333    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:42.177904    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:42.178005    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:42.182598    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:34:42.182598    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.39.106 22 <nil> <nil>}
	I0419 17:34:42.182598    6592 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.32.218"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0419 17:34:42.356000    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.32.218
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0419 17:34:42.356074    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:44.361364    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:44.361364    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:44.374941    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:46.828897    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:46.828897    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:46.835117    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:34:46.835590    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.39.106 22 <nil> <nil>}
	I0419 17:34:46.835590    6592 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0419 17:34:48.975232    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0419 17:34:48.975232    6592 machine.go:97] duration metric: took 44.2947577s to provisionDockerMachine
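The unit install above follows a "replace only if changed" pattern: diff the current `docker.service` against the freshly written `.new` file, and only on a difference move it into place and daemon-reload (the `%!s(MISSING)` runs in the logged command are Go's fmt rendering literal `%s` printf verbs in the remote shell string, not corruption of the unit file). A hedged sketch with temp files standing in for the systemd paths:

```shell
#!/bin/sh
# Sketch of the diff-or-replace flow above; temp files stand in for
# /lib/systemd/system/docker.service and docker.service.new.
OLD=$(mktemp -u)   # -u: path only, file missing -- like the first boot in the log
NEW=$(mktemp)
printf '[Unit]\nDescription=Docker Application Container Engine\n' > "$NEW"
if ! diff -u "$OLD" "$NEW" >/dev/null 2>&1; then
  mv "$NEW" "$OLD"
  echo "unit replaced"   # the real run then daemon-reloads, enables, restarts
fi
```

This explains the `diff: can't stat` line in the output above: on a fresh machine the old unit does not exist, so `diff` fails and the replace branch runs.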
	I0419 17:34:48.975232    6592 client.go:171] duration metric: took 1m52.430861s to LocalClient.Create
	I0419 17:34:48.975232    6592 start.go:167] duration metric: took 1m52.430964s to libmachine.API.Create "ha-095800"
	I0419 17:34:48.975789    6592 start.go:293] postStartSetup for "ha-095800-m02" (driver="hyperv")
	I0419 17:34:48.975857    6592 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 17:34:48.990268    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 17:34:48.990268    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:51.012633    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:51.012633    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:51.012778    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:53.504206    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:53.504206    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:53.504309    6592 sshutil.go:53] new ssh client: &{IP:172.19.39.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\id_rsa Username:docker}
	I0419 17:34:53.623546    6592 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6332666s)
	I0419 17:34:53.640312    6592 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 17:34:53.648053    6592 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 17:34:53.648053    6592 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0419 17:34:53.648591    6592 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0419 17:34:53.649634    6592 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> 34162.pem in /etc/ssl/certs
	I0419 17:34:53.649634    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /etc/ssl/certs/34162.pem
	I0419 17:34:53.666517    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 17:34:53.685294    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /etc/ssl/certs/34162.pem (1708 bytes)
	I0419 17:34:53.732503    6592 start.go:296] duration metric: took 4.7567024s for postStartSetup
	I0419 17:34:53.735682    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:34:55.779501    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:34:55.779501    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:55.791981    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:34:58.257053    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:34:58.257053    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:34:58.259551    6592 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
	I0419 17:34:58.276027    6592 start.go:128] duration metric: took 2m1.7340016s to createHost
	I0419 17:34:58.276182    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:35:00.339325    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:35:00.339325    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:00.339410    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:35:02.802652    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:35:02.814096    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:02.820618    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:35:02.821261    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.39.106 22 <nil> <nil>}
	I0419 17:35:02.821261    6592 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 17:35:02.956538    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713573302.947564889
	
	I0419 17:35:02.956538    6592 fix.go:216] guest clock: 1713573302.947564889
	I0419 17:35:02.956538    6592 fix.go:229] Guest: 2024-04-19 17:35:02.947564889 -0700 PDT Remote: 2024-04-19 17:34:58.2761069 -0700 PDT m=+324.643398501 (delta=4.671457989s)
	I0419 17:35:02.956538    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:35:04.981992    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:35:04.993935    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:04.994227    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:35:07.473549    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:35:07.485780    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:07.491553    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:35:07.492737    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.39.106 22 <nil> <nil>}
	I0419 17:35:07.492737    6592 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713573302
	I0419 17:35:07.640720    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: Sat Apr 20 00:35:02 UTC 2024
	
	I0419 17:35:07.640720    6592 fix.go:236] clock set: Sat Apr 20 00:35:02 UTC 2024
	 (err=<nil>)
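The clock fix above compares the guest's epoch (from `date +%s.%N` over SSH) against the host and, since the guest was ~4.67s ahead, resets it with `sudo date -s @<epoch>`. A minimal sketch of the delta arithmetic, using the epochs from the log (host value approximated to whole seconds):

```shell
#!/bin/sh
# Sketch of the guest-clock drift check above; values taken from the log.
guest=1713573302   # guest epoch reported over SSH
host=1713573298    # host epoch at the same instant (guest ~4.67s ahead)
echo "delta: $((guest - host))s"   # -> delta: 4s
```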
	I0419 17:35:07.640720    6592 start.go:83] releasing machines lock for "ha-095800-m02", held for 2m11.0992556s
	I0419 17:35:07.641317    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:35:09.679388    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:35:09.688695    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:09.688869    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:35:12.129591    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:35:12.129591    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:12.141739    6592 out.go:177] * Found network options:
	I0419 17:35:12.147015    6592 out.go:177]   - NO_PROXY=172.19.32.218
	W0419 17:35:12.149450    6592 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 17:35:12.152667    6592 out.go:177]   - NO_PROXY=172.19.32.218
	W0419 17:35:12.155023    6592 proxy.go:119] fail to check proxy env: Error ip not in block
	W0419 17:35:12.156655    6592 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 17:35:12.160668    6592 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 17:35:12.161532    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:35:12.171567    6592 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0419 17:35:12.171567    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:35:14.233893    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:35:14.233893    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:14.234033    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:35:14.249163    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:35:14.257059    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:14.257191    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 17:35:16.773275    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:35:16.773275    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:16.786750    6592 sshutil.go:53] new ssh client: &{IP:172.19.39.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\id_rsa Username:docker}
	I0419 17:35:16.810451    6592 main.go:141] libmachine: [stdout =====>] : 172.19.39.106
	
	I0419 17:35:16.812224    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:16.812276    6592 sshutil.go:53] new ssh client: &{IP:172.19.39.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m02\id_rsa Username:docker}
	I0419 17:35:16.938031    6592 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7773509s)
	I0419 17:35:16.938031    6592 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7664522s)
	W0419 17:35:16.938174    6592 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 17:35:16.951426    6592 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 17:35:16.975914    6592 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
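The bridge-CNI disabling step above renames any `*bridge*` or `*podman*` conf files to `*.mk_disabled` so they stop being loaded; a hedged sketch in a temp dir rather than `/etc/cni/net.d`:

```shell
#!/bin/sh
# Sketch of the CNI-disable find/mv above, run in a temp directory.
D=$(mktemp -d)
touch "$D/87-podman-bridge.conflist" "$D/10-flannel.conf"
find "$D" -maxdepth 1 -type f \( \( -name '*bridge*' -o -name '*podman*' \) \
  -a -not -name '*.mk_disabled' \) -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$D"   # only 87-podman-bridge.conflist gains the .mk_disabled suffix
```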
	I0419 17:35:16.982893    6592 start.go:494] detecting cgroup driver to use...
	I0419 17:35:16.982923    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 17:35:17.032304    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0419 17:35:17.071454    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0419 17:35:17.095266    6592 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0419 17:35:17.110092    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0419 17:35:17.148867    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 17:35:17.182282    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0419 17:35:17.220129    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 17:35:17.259774    6592 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 17:35:17.296484    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0419 17:35:17.330959    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0419 17:35:17.366377    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0419 17:35:17.402845    6592 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 17:35:17.438569    6592 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 17:35:17.478067    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:35:17.696568    6592 ssh_runner.go:195] Run: sudo systemctl restart containerd
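Two of the containerd `sed` edits above can be demonstrated against a temp copy of the config (the real target is `/etc/containerd/config.toml`): pinning the pause image and switching runc to the cgroupfs driver.

```shell
#!/bin/sh
# Sketch of the sandbox_image and SystemdCgroup edits from the log,
# applied to a temp copy instead of /etc/containerd/config.toml.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  SystemdCgroup = true
EOF
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$CFG"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
grep -E 'sandbox_image|SystemdCgroup' "$CFG"
```

The capture group `( *)` preserves the existing indentation, which is why the logged patterns all reinsert `\1` before the rewritten key.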
	I0419 17:35:17.731857    6592 start.go:494] detecting cgroup driver to use...
	I0419 17:35:17.747208    6592 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0419 17:35:17.790020    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 17:35:17.830005    6592 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 17:35:17.879049    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 17:35:17.918467    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 17:35:17.962077    6592 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0419 17:35:18.029673    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 17:35:18.056357    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 17:35:18.107216    6592 ssh_runner.go:195] Run: which cri-dockerd
	I0419 17:35:18.129495    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0419 17:35:18.148830    6592 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0419 17:35:18.197633    6592 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0419 17:35:18.402292    6592 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0419 17:35:18.596698    6592 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0419 17:35:18.596698    6592 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0419 17:35:18.642989    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:35:18.855322    6592 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 17:35:21.409619    6592 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5542379s)
	I0419 17:35:21.423805    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0419 17:35:21.465962    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 17:35:21.506351    6592 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0419 17:35:21.718975    6592 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0419 17:35:21.918945    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:35:22.137750    6592 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0419 17:35:22.185651    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 17:35:22.225396    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:35:22.425690    6592 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0419 17:35:22.537635    6592 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0419 17:35:22.547991    6592 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0419 17:35:22.558080    6592 start.go:562] Will wait 60s for crictl version
	I0419 17:35:22.568341    6592 ssh_runner.go:195] Run: which crictl
	I0419 17:35:22.589384    6592 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 17:35:22.632396    6592 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0419 17:35:22.642172    6592 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 17:35:22.698356    6592 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 17:35:22.739421    6592 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0419 17:35:22.742431    6592 out.go:177]   - env NO_PROXY=172.19.32.218
	I0419 17:35:22.744839    6592 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0419 17:35:22.749574    6592 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0419 17:35:22.749574    6592 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0419 17:35:22.749717    6592 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0419 17:35:22.749717    6592 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8c:b9:25 Flags:up|broadcast|multicast|running}
	I0419 17:35:22.751741    6592 ip.go:210] interface addr: fe80::ce04:318e:a1d8:4460/64
	I0419 17:35:22.751741    6592 ip.go:210] interface addr: 172.19.32.1/20
	I0419 17:35:22.760477    6592 ssh_runner.go:195] Run: grep 172.19.32.1	host.minikube.internal$ /etc/hosts
	I0419 17:35:22.773484    6592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.32.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
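	The `/etc/hosts` command above is an idempotent replace-or-append idiom: filter out any existing line for the name, append the fresh mapping, then copy the result back. A local sketch against a scratch file rather than the real /etc/hosts (the stale IP is made up):

	```shell
	set -eu
	hosts=$(mktemp)
	printf '127.0.0.1\tlocalhost\n172.19.0.9\thost.minikube.internal\n' > "$hosts"
	# Drop any prior host.minikube.internal entry, then append the current one.
	{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '172.19.32.1\thost.minikube.internal\n'; } > "$hosts.new"
	mv "$hosts.new" "$hosts"   # the real flow uses `sudo cp` to write /etc/hosts
	```

	Writing to a temp file first matters because `grep` reads the same file the redirection would otherwise truncate.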
	I0419 17:35:22.790772    6592 mustload.go:65] Loading cluster: ha-095800
	I0419 17:35:22.790772    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:35:22.798746    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:35:24.836383    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:35:24.836383    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:24.848764    6592 host.go:66] Checking if "ha-095800" exists ...
	I0419 17:35:24.849609    6592 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800 for IP: 172.19.39.106
	I0419 17:35:24.849609    6592 certs.go:194] generating shared ca certs ...
	I0419 17:35:24.849609    6592 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:35:24.850341    6592 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0419 17:35:24.850592    6592 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0419 17:35:24.850592    6592 certs.go:256] generating profile certs ...
	I0419 17:35:24.851261    6592 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\client.key
	I0419 17:35:24.851261    6592 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.87ccae9f
	I0419 17:35:24.851261    6592 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.87ccae9f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.32.218 172.19.39.106 172.19.47.254]
	I0419 17:35:25.097787    6592 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.87ccae9f ...
	I0419 17:35:25.097787    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.87ccae9f: {Name:mk23b04572e4fd34b587d1df7a9f07c1c4f91844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:35:25.105537    6592 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.87ccae9f ...
	I0419 17:35:25.105537    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.87ccae9f: {Name:mk1ae5628c1bb6755308a3a67f856b296285d46b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:35:25.106782    6592 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.87ccae9f -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt
	I0419 17:35:25.121574    6592 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.87ccae9f -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key
	I0419 17:35:25.123108    6592 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key
	I0419 17:35:25.123108    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 17:35:25.123108    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0419 17:35:25.123705    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 17:35:25.123705    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 17:35:25.124239    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 17:35:25.124521    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 17:35:25.124567    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 17:35:25.124567    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 17:35:25.125650    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem (1338 bytes)
	W0419 17:35:25.126122    6592 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416_empty.pem, impossibly tiny 0 bytes
	I0419 17:35:25.126197    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0419 17:35:25.126502    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0419 17:35:25.126820    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0419 17:35:25.127015    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0419 17:35:25.127558    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem (1708 bytes)
	I0419 17:35:25.127756    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem -> /usr/share/ca-certificates/3416.pem
	I0419 17:35:25.127959    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /usr/share/ca-certificates/34162.pem
	I0419 17:35:25.127959    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:35:25.127959    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:35:27.179721    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:35:27.182686    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:27.182792    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:35:29.677269    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:35:29.677269    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:29.691642    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:35:29.808705    6592 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0419 17:35:29.818028    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0419 17:35:29.851852    6592 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0419 17:35:29.860192    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0419 17:35:29.899026    6592 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0419 17:35:29.908596    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0419 17:35:29.944431    6592 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0419 17:35:29.951490    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0419 17:35:29.985304    6592 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0419 17:35:29.991980    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0419 17:35:30.024564    6592 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0419 17:35:30.033365    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0419 17:35:30.057985    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 17:35:30.109679    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 17:35:30.167380    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 17:35:30.206495    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 17:35:30.266020    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0419 17:35:30.311495    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0419 17:35:30.374663    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 17:35:30.429114    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0419 17:35:30.486075    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem --> /usr/share/ca-certificates/3416.pem (1338 bytes)
	I0419 17:35:30.533124    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /usr/share/ca-certificates/34162.pem (1708 bytes)
	I0419 17:35:30.581671    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 17:35:30.625672    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0419 17:35:30.656210    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0419 17:35:30.693315    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0419 17:35:30.727155    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0419 17:35:30.760316    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0419 17:35:30.800343    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0419 17:35:30.831688    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0419 17:35:30.877019    6592 ssh_runner.go:195] Run: openssl version
	I0419 17:35:30.902910    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3416.pem && ln -fs /usr/share/ca-certificates/3416.pem /etc/ssl/certs/3416.pem"
	I0419 17:35:30.937753    6592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3416.pem
	I0419 17:35:30.945533    6592 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 17:35:30.958825    6592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3416.pem
	I0419 17:35:30.982855    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3416.pem /etc/ssl/certs/51391683.0"
	I0419 17:35:31.019450    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34162.pem && ln -fs /usr/share/ca-certificates/34162.pem /etc/ssl/certs/34162.pem"
	I0419 17:35:31.053730    6592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34162.pem
	I0419 17:35:31.064997    6592 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 17:35:31.079060    6592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34162.pem
	I0419 17:35:31.100760    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34162.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 17:35:31.135906    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 17:35:31.168805    6592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:35:31.176056    6592 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:35:31.189769    6592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:35:31.212767    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
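	The `openssl x509 -hash` / symlink pairs above follow OpenSSL's CA lookup convention: certificates in /etc/ssl/certs are resolved by subject-hash filename, so each PEM gets a `<hash>.0` symlink. A sketch with a throwaway self-signed cert in a temp dir (the CN is made up):

	```shell
	set -eu
	dir=$(mktemp -d)
	# Generate a disposable self-signed CA certificate.
	openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=sketchCA' \
	  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
	# Compute the subject hash and create the lookup symlink OpenSSL expects.
	hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
	ln -fs "$dir/ca.pem" "$dir/$hash.0"
	```

	The `.0` suffix disambiguates distinct certificates whose subjects hash to the same value (`.1`, `.2`, … for collisions).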
	I0419 17:35:31.254986    6592 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 17:35:31.261890    6592 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 17:35:31.262228    6592 kubeadm.go:928] updating node {m02 172.19.39.106 8443 v1.30.0 docker true true} ...
	I0419 17:35:31.262469    6592 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-095800-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP:172.19.47.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 17:35:31.262533    6592 kube-vip.go:111] generating kube-vip config ...
	I0419 17:35:31.276469    6592 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0419 17:35:31.302571    6592 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0419 17:35:31.302645    6592 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.47.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0419 17:35:31.313717    6592 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 17:35:31.334482    6592 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0419 17:35:31.349549    6592 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0419 17:35:31.374757    6592 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm
	I0419 17:35:31.374757    6592 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet
	I0419 17:35:31.374757    6592 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl
	I0419 17:35:32.388056    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0419 17:35:32.408987    6592 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0419 17:35:32.410351    6592 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0419 17:35:32.418290    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0419 17:35:34.055018    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0419 17:35:34.079226    6592 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0419 17:35:34.086778    6592 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0419 17:35:34.087039    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0419 17:35:36.092135    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 17:35:36.126993    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0419 17:35:36.140982    6592 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0419 17:35:36.151934    6592 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0419 17:35:36.151934    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
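	Each stat/scp pair above is a cheap cache check: the binary is copied only when `stat -c "%s %y"` fails on the target path (or its size/mtime disagree with the cache). A local sketch with `cp` standing in for the scp transfer (file names are illustrative):

	```shell
	set -eu
	src=$(mktemp)
	dst="$src.kubelet"
	echo 'cached-kubelet-binary' > "$src"
	# Transfer only if the existence check fails, mirroring the log flow above.
	if ! stat -c '%s %y' "$dst" >/dev/null 2>&1; then
	  cp "$src" "$dst"
	fi
	```

	On a re-run the `stat` succeeds and the copy is skipped, which is why warm starts transfer no binaries.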
	I0419 17:35:36.764876    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0419 17:35:36.782288    6592 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0419 17:35:36.820042    6592 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 17:35:36.851191    6592 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0419 17:35:36.897717    6592 ssh_runner.go:195] Run: grep 172.19.47.254	control-plane.minikube.internal$ /etc/hosts
	I0419 17:35:36.903584    6592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.47.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 17:35:36.939185    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:35:37.132075    6592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 17:35:37.164877    6592 host.go:66] Checking if "ha-095800" exists ...
	I0419 17:35:37.165610    6592 start.go:316] joinCluster: &{Name:ha-095800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP:172.19.47.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.32.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.39.106 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 17:35:37.166199    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0419 17:35:37.166355    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:35:39.198819    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:35:39.198819    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:39.210457    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:35:41.714026    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:35:41.714026    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:35:41.725963    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:35:41.949562    6592 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7832813s)
	I0419 17:35:41.949562    6592 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.19.39.106 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 17:35:41.949562    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n3zyqk.8dhrqnhr8ufhyc6l --discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-095800-m02 --control-plane --apiserver-advertise-address=172.19.39.106 --apiserver-bind-port=8443"
	I0419 17:36:25.521141    6592 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n3zyqk.8dhrqnhr8ufhyc6l --discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-095800-m02 --control-plane --apiserver-advertise-address=172.19.39.106 --apiserver-bind-port=8443": (43.5714167s)
	I0419 17:36:25.521203    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0419 17:36:26.306474    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-095800-m02 minikube.k8s.io/updated_at=2024_04_19T17_36_26_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=ha-095800 minikube.k8s.io/primary=false
	I0419 17:36:26.471324    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-095800-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0419 17:36:26.617852    6592 start.go:318] duration metric: took 49.4520716s to joinCluster
	I0419 17:36:26.618042    6592 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.39.106 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 17:36:26.620452    6592 out.go:177] * Verifying Kubernetes components...
	I0419 17:36:26.618488    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:36:26.634759    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:36:26.965867    6592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 17:36:26.992399    6592 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 17:36:26.993022    6592 kapi.go:59] client config for ha-095800: &rest.Config{Host:"https://172.19.47.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-095800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-095800\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c35620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0419 17:36:26.993230    6592 kubeadm.go:477] Overriding stale ClientConfig host https://172.19.47.254:8443 with https://172.19.32.218:8443
	I0419 17:36:26.993484    6592 node_ready.go:35] waiting up to 6m0s for node "ha-095800-m02" to be "Ready" ...
	I0419 17:36:26.994061    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:26.994061    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:26.994061    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:26.994061    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:27.009242    6592 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0419 17:36:27.503919    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:27.503985    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:27.503985    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:27.504018    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:27.509503    6592 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:36:27.996078    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:27.996078    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:27.996078    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:27.996078    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:28.002025    6592 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:36:28.507122    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:28.507122    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:28.507122    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:28.507291    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:28.512194    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:36:29.007292    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:29.007292    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:29.007292    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:29.007292    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:29.014078    6592 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:36:29.016864    6592 node_ready.go:53] node "ha-095800-m02" has status "Ready":"False"
	I0419 17:36:29.502218    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:29.502218    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:29.502218    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:29.502218    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:29.506540    6592 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:36:30.007826    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:30.007826    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:30.007826    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:30.007826    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:30.010044    6592 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:36:30.511873    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:30.511951    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:30.511994    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:30.511994    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:30.513795    6592 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:36:31.006568    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:31.006745    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:31.006745    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:31.006745    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:31.018441    6592 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0419 17:36:31.023740    6592 node_ready.go:53] node "ha-095800-m02" has status "Ready":"False"
	I0419 17:36:31.494606    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:31.494838    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:31.494838    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:31.494897    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:31.498401    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:36:31.997171    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:31.997206    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:31.997206    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:31.997206    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:32.001975    6592 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:36:32.510298    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:32.510367    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:32.510401    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:32.510401    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:32.515416    6592 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:36:33.004763    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:33.005020    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:33.005020    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:33.005020    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:33.008890    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:36:33.508603    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:33.508603    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:33.508603    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:33.508603    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:33.671262    6592 round_trippers.go:574] Response Status: 200 OK in 162 milliseconds
	I0419 17:36:33.684260    6592 node_ready.go:53] node "ha-095800-m02" has status "Ready":"False"
	I0419 17:36:34.003928    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:34.003928    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:34.003928    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:34.003928    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:34.039303    6592 round_trippers.go:574] Response Status: 200 OK in 35 milliseconds
	I0419 17:36:34.501746    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:34.501746    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:34.501746    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:34.501746    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:34.509305    6592 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 17:36:34.998216    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:34.998318    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:34.998318    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:34.998318    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:35.003330    6592 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:36:35.508011    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:35.508282    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:35.508282    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:35.508282    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:35.514142    6592 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:36:36.006539    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:36.006539    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:36.006539    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:36.006539    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:36.008482    6592 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:36:36.013454    6592 node_ready.go:53] node "ha-095800-m02" has status "Ready":"False"
	I0419 17:36:36.517783    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:36.517959    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:36.517959    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:36.517959    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:36.518493    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:37.006346    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:37.006346    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:37.006346    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:37.006346    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:37.006727    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:37.517049    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:37.517049    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:37.517139    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:37.517139    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:37.522813    6592 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:36:37.997763    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:37.997763    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:37.997763    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:37.997763    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.003365    6592 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:36:38.003966    6592 node_ready.go:49] node "ha-095800-m02" has status "Ready":"True"
	I0419 17:36:38.004099    6592 node_ready.go:38] duration metric: took 11.0100537s for node "ha-095800-m02" to be "Ready" ...
	I0419 17:36:38.004099    6592 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 17:36:38.004353    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:36:38.004353    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.004353    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.004413    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.016671    6592 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0419 17:36:38.030873    6592 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7mk28" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.030873    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7mk28
	I0419 17:36:38.030873    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.030873    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.030873    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.032596    6592 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:36:38.040724    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:38.040830    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.040830    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.040830    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.051702    6592 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0419 17:36:38.053343    6592 pod_ready.go:92] pod "coredns-7db6d8ff4d-7mk28" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:38.053343    6592 pod_ready.go:81] duration metric: took 22.4697ms for pod "coredns-7db6d8ff4d-7mk28" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.053401    6592 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vklb9" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.053524    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vklb9
	I0419 17:36:38.053524    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.053524    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.053582    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.058198    6592 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:36:38.061933    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:38.062058    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.062058    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.062058    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.062286    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:38.068485    6592 pod_ready.go:92] pod "coredns-7db6d8ff4d-vklb9" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:38.068557    6592 pod_ready.go:81] duration metric: took 15.0842ms for pod "coredns-7db6d8ff4d-vklb9" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.068557    6592 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.068629    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-095800
	I0419 17:36:38.068714    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.068714    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.068714    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.073133    6592 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:36:38.073273    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:38.073805    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.073805    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.073847    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.076446    6592 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:36:38.079503    6592 pod_ready.go:92] pod "etcd-ha-095800" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:38.079582    6592 pod_ready.go:81] duration metric: took 11.0251ms for pod "etcd-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.079582    6592 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.079655    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-095800-m02
	I0419 17:36:38.079730    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.079730    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.079730    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.080431    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:38.085921    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:38.085993    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.085993    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.085993    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.089401    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:36:38.090210    6592 pod_ready.go:92] pod "etcd-ha-095800-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:38.090210    6592 pod_ready.go:81] duration metric: took 10.6286ms for pod "etcd-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.090210    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.198670    6592 request.go:629] Waited for 107.6412ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800
	I0419 17:36:38.198872    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800
	I0419 17:36:38.198872    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.198872    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.198872    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.199582    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:38.413161    6592 request.go:629] Waited for 208.2183ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:38.413280    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:38.413280    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.413280    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.413280    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.413660    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:38.419496    6592 pod_ready.go:92] pod "kube-apiserver-ha-095800" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:38.419604    6592 pod_ready.go:81] duration metric: took 328.8362ms for pod "kube-apiserver-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.419604    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.608885    6592 request.go:629] Waited for 188.7472ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m02
	I0419 17:36:38.608969    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m02
	I0419 17:36:38.608969    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.608969    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.609090    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.615536    6592 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:36:38.809691    6592 request.go:629] Waited for 190.3486ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:38.809941    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:38.810111    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:38.810111    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:38.810111    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:38.810465    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:38.816133    6592 pod_ready.go:92] pod "kube-apiserver-ha-095800-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:38.816204    6592 pod_ready.go:81] duration metric: took 396.5278ms for pod "kube-apiserver-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:38.816204    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:39.012105    6592 request.go:629] Waited for 195.6204ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800
	I0419 17:36:39.012225    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800
	I0419 17:36:39.012225    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:39.012225    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:39.012225    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:39.012717    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:39.208236    6592 request.go:629] Waited for 188.8556ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:39.208346    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:39.208420    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:39.208420    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:39.208452    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:39.208865    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:39.214503    6592 pod_ready.go:92] pod "kube-controller-manager-ha-095800" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:39.214503    6592 pod_ready.go:81] duration metric: took 398.298ms for pod "kube-controller-manager-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:39.214503    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:39.407550    6592 request.go:629] Waited for 192.7473ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800-m02
	I0419 17:36:39.407550    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800-m02
	I0419 17:36:39.407550    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:39.407550    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:39.407550    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:39.414644    6592 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:36:39.608178    6592 request.go:629] Waited for 192.3441ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:39.608510    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:39.608576    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:39.608612    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:39.608612    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:39.622229    6592 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:36:39.622920    6592 pod_ready.go:92] pod "kube-controller-manager-ha-095800-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:39.622920    6592 pod_ready.go:81] duration metric: took 408.4159ms for pod "kube-controller-manager-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:39.622920    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4nldk" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:39.808013    6592 request.go:629] Waited for 184.7351ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4nldk
	I0419 17:36:39.808346    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4nldk
	I0419 17:36:39.808346    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:39.808427    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:39.808427    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:39.808672    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:40.009217    6592 request.go:629] Waited for 193.4195ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:40.009406    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:40.009406    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:40.009406    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:40.009406    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:40.009759    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:40.015867    6592 pod_ready.go:92] pod "kube-proxy-4nldk" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:40.015867    6592 pod_ready.go:81] duration metric: took 392.9461ms for pod "kube-proxy-4nldk" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:40.015867    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vq826" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:40.203356    6592 request.go:629] Waited for 187.2505ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vq826
	I0419 17:36:40.203572    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vq826
	I0419 17:36:40.203572    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:40.203572    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:40.203572    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:40.214446    6592 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0419 17:36:40.410827    6592 request.go:629] Waited for 192.5621ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:40.411036    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:40.411148    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:40.411182    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:40.411182    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:40.411498    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:40.417793    6592 pod_ready.go:92] pod "kube-proxy-vq826" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:40.418325    6592 pod_ready.go:81] duration metric: took 402.4574ms for pod "kube-proxy-vq826" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:40.418325    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:40.603529    6592 request.go:629] Waited for 184.8265ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800
	I0419 17:36:40.603670    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800
	I0419 17:36:40.603821    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:40.603821    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:40.603821    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:40.612277    6592 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 17:36:40.810889    6592 request.go:629] Waited for 196.7316ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:40.811027    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:36:40.811093    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:40.811169    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:40.811169    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:40.812713    6592 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:36:40.817079    6592 pod_ready.go:92] pod "kube-scheduler-ha-095800" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:40.817229    6592 pod_ready.go:81] duration metric: took 398.7525ms for pod "kube-scheduler-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:40.817229    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:41.001845    6592 request.go:629] Waited for 184.5114ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800-m02
	I0419 17:36:41.002122    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800-m02
	I0419 17:36:41.002122    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:41.002122    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:41.002122    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:41.002824    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:41.215011    6592 request.go:629] Waited for 206.5742ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:41.215011    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:36:41.215011    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:41.215011    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:41.215011    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:41.220661    6592 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:36:41.222751    6592 pod_ready.go:92] pod "kube-scheduler-ha-095800-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 17:36:41.222751    6592 pod_ready.go:81] duration metric: took 405.5202ms for pod "kube-scheduler-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:36:41.222751    6592 pod_ready.go:38] duration metric: took 3.2185249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 17:36:41.222751    6592 api_server.go:52] waiting for apiserver process to appear ...
	I0419 17:36:41.237304    6592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 17:36:41.277078    6592 api_server.go:72] duration metric: took 14.6588877s to wait for apiserver process to appear ...
	I0419 17:36:41.277130    6592 api_server.go:88] waiting for apiserver healthz status ...
	I0419 17:36:41.277180    6592 api_server.go:253] Checking apiserver healthz at https://172.19.32.218:8443/healthz ...
	I0419 17:36:41.283624    6592 api_server.go:279] https://172.19.32.218:8443/healthz returned 200:
	ok
	I0419 17:36:41.285414    6592 round_trippers.go:463] GET https://172.19.32.218:8443/version
	I0419 17:36:41.285414    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:41.285414    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:41.285414    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:41.285965    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:41.287754    6592 api_server.go:141] control plane version: v1.30.0
	I0419 17:36:41.287839    6592 api_server.go:131] duration metric: took 10.7087ms to wait for apiserver health ...
	I0419 17:36:41.287894    6592 system_pods.go:43] waiting for kube-system pods to appear ...
	I0419 17:36:41.408501    6592 request.go:629] Waited for 120.3424ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:36:41.408643    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:36:41.408643    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:41.408643    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:41.408643    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:41.409448    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:41.424834    6592 system_pods.go:59] 17 kube-system pods found
	I0419 17:36:41.424834    6592 system_pods.go:61] "coredns-7db6d8ff4d-7mk28" [e9d98fbb-21cc-4618-9709-0b27986c63b1] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "coredns-7db6d8ff4d-vklb9" [a1f46798-9bf9-4abe-9d6d-573902a0d373] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "etcd-ha-095800" [1aaf32fa-58bb-40f3-a162-21259eb4f376] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "etcd-ha-095800-m02" [5b0fc0be-2f86-4758-b8eb-aeb31245afd7] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kindnet-7j4cr" [92ce62b8-71b2-4deb-b295-cf938509a4e5] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kindnet-kpn69" [49ffd8bc-d455-4f64-9822-e2d363df7cc7] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-apiserver-ha-095800" [ebaad661-6759-415e-b65f-14d6ffb46853] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-apiserver-ha-095800-m02" [99267604-9885-472a-aab9-eda6b150457d] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-controller-manager-ha-095800" [dc9b9d64-b78b-44e3-a7f6-26ba6007b6dc] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-controller-manager-ha-095800-m02" [534ea924-2ff9-48ec-a02c-ce23e4c47324] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-proxy-4nldk" [79c714ec-b6ec-4cff-86fb-f560bed67202] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-proxy-vq826" [d2b22474-6974-4cbd-8565-95facc3c817e] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-scheduler-ha-095800" [af0f5d53-c6ab-4235-b9a2-ce0a371ff55f] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-scheduler-ha-095800-m02" [000d5f12-1c3f-41ba-b0dd-696da8c6b8ad] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-vip-ha-095800" [2fe74317-1ff4-4147-ae17-f2f31f4f06ba] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "kube-vip-ha-095800-m02" [e80eec5a-c346-4f90-a843-b6ed2d111f0b] Running
	I0419 17:36:41.424834    6592 system_pods.go:61] "storage-provisioner" [f58269e6-1ef1-442a-972b-cc05662b174c] Running
	I0419 17:36:41.424834    6592 system_pods.go:74] duration metric: took 136.9397ms to wait for pod list to return data ...
	I0419 17:36:41.424834    6592 default_sa.go:34] waiting for default service account to be created ...
	I0419 17:36:41.607603    6592 request.go:629] Waited for 182.7682ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/default/serviceaccounts
	I0419 17:36:41.607603    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/default/serviceaccounts
	I0419 17:36:41.607603    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:41.607603    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:41.607603    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:41.608582    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:41.613379    6592 default_sa.go:45] found service account: "default"
	I0419 17:36:41.613505    6592 default_sa.go:55] duration metric: took 188.6706ms for default service account to be created ...
	I0419 17:36:41.613505    6592 system_pods.go:116] waiting for k8s-apps to be running ...
	I0419 17:36:41.822990    6592 request.go:629] Waited for 209.2696ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:36:41.823080    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:36:41.823080    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:41.823080    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:41.823080    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:41.830119    6592 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 17:36:41.839008    6592 system_pods.go:86] 17 kube-system pods found
	I0419 17:36:41.839586    6592 system_pods.go:89] "coredns-7db6d8ff4d-7mk28" [e9d98fbb-21cc-4618-9709-0b27986c63b1] Running
	I0419 17:36:41.839586    6592 system_pods.go:89] "coredns-7db6d8ff4d-vklb9" [a1f46798-9bf9-4abe-9d6d-573902a0d373] Running
	I0419 17:36:41.839586    6592 system_pods.go:89] "etcd-ha-095800" [1aaf32fa-58bb-40f3-a162-21259eb4f376] Running
	I0419 17:36:41.839586    6592 system_pods.go:89] "etcd-ha-095800-m02" [5b0fc0be-2f86-4758-b8eb-aeb31245afd7] Running
	I0419 17:36:41.839586    6592 system_pods.go:89] "kindnet-7j4cr" [92ce62b8-71b2-4deb-b295-cf938509a4e5] Running
	I0419 17:36:41.839586    6592 system_pods.go:89] "kindnet-kpn69" [49ffd8bc-d455-4f64-9822-e2d363df7cc7] Running
	I0419 17:36:41.839586    6592 system_pods.go:89] "kube-apiserver-ha-095800" [ebaad661-6759-415e-b65f-14d6ffb46853] Running
	I0419 17:36:41.839716    6592 system_pods.go:89] "kube-apiserver-ha-095800-m02" [99267604-9885-472a-aab9-eda6b150457d] Running
	I0419 17:36:41.839716    6592 system_pods.go:89] "kube-controller-manager-ha-095800" [dc9b9d64-b78b-44e3-a7f6-26ba6007b6dc] Running
	I0419 17:36:41.839716    6592 system_pods.go:89] "kube-controller-manager-ha-095800-m02" [534ea924-2ff9-48ec-a02c-ce23e4c47324] Running
	I0419 17:36:41.839766    6592 system_pods.go:89] "kube-proxy-4nldk" [79c714ec-b6ec-4cff-86fb-f560bed67202] Running
	I0419 17:36:41.839807    6592 system_pods.go:89] "kube-proxy-vq826" [d2b22474-6974-4cbd-8565-95facc3c817e] Running
	I0419 17:36:41.839807    6592 system_pods.go:89] "kube-scheduler-ha-095800" [af0f5d53-c6ab-4235-b9a2-ce0a371ff55f] Running
	I0419 17:36:41.839807    6592 system_pods.go:89] "kube-scheduler-ha-095800-m02" [000d5f12-1c3f-41ba-b0dd-696da8c6b8ad] Running
	I0419 17:36:41.839807    6592 system_pods.go:89] "kube-vip-ha-095800" [2fe74317-1ff4-4147-ae17-f2f31f4f06ba] Running
	I0419 17:36:41.839807    6592 system_pods.go:89] "kube-vip-ha-095800-m02" [e80eec5a-c346-4f90-a843-b6ed2d111f0b] Running
	I0419 17:36:41.839807    6592 system_pods.go:89] "storage-provisioner" [f58269e6-1ef1-442a-972b-cc05662b174c] Running
	I0419 17:36:41.839807    6592 system_pods.go:126] duration metric: took 226.3017ms to wait for k8s-apps to be running ...
	I0419 17:36:41.839807    6592 system_svc.go:44] waiting for kubelet service to be running ....
	I0419 17:36:41.848433    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 17:36:41.874746    6592 system_svc.go:56] duration metric: took 34.9388ms WaitForService to wait for kubelet
	I0419 17:36:41.874746    6592 kubeadm.go:576] duration metric: took 15.2566056s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 17:36:41.874746    6592 node_conditions.go:102] verifying NodePressure condition ...
	I0419 17:36:41.997978    6592 request.go:629] Waited for 123.0419ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes
	I0419 17:36:41.998034    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes
	I0419 17:36:41.998034    6592 round_trippers.go:469] Request Headers:
	I0419 17:36:41.998034    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:36:41.998034    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:36:41.998565    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:36:42.004110    6592 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 17:36:42.004208    6592 node_conditions.go:123] node cpu capacity is 2
	I0419 17:36:42.004243    6592 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 17:36:42.004243    6592 node_conditions.go:123] node cpu capacity is 2
	I0419 17:36:42.004243    6592 node_conditions.go:105] duration metric: took 129.4965ms to run NodePressure ...
	I0419 17:36:42.004298    6592 start.go:240] waiting for startup goroutines ...
	I0419 17:36:42.004323    6592 start.go:254] writing updated cluster config ...
	I0419 17:36:42.008503    6592 out.go:177] 
	I0419 17:36:42.020747    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:36:42.023505    6592 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
	I0419 17:36:42.029464    6592 out.go:177] * Starting "ha-095800-m03" control-plane node in "ha-095800" cluster
	I0419 17:36:42.032504    6592 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 17:36:42.032646    6592 cache.go:56] Caching tarball of preloaded images
	I0419 17:36:42.032646    6592 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0419 17:36:42.033190    6592 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 17:36:42.033286    6592 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
	I0419 17:36:42.037601    6592 start.go:360] acquireMachinesLock for ha-095800-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 17:36:42.039479    6592 start.go:364] duration metric: took 1.8778ms to acquireMachinesLock for "ha-095800-m03"
	I0419 17:36:42.039479    6592 start.go:93] Provisioning new machine with config: &{Name:ha-095800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:def
ault APIServerHAVIP:172.19.47.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.32.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.39.106 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false is
tio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 17:36:42.039479    6592 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0419 17:36:42.040643    6592 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 17:36:42.040643    6592 start.go:159] libmachine.API.Create for "ha-095800" (driver="hyperv")
	I0419 17:36:42.040643    6592 client.go:168] LocalClient.Create starting
	I0419 17:36:42.040643    6592 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0419 17:36:42.046431    6592 main.go:141] libmachine: Decoding PEM data...
	I0419 17:36:42.046431    6592 main.go:141] libmachine: Parsing certificate...
	I0419 17:36:42.046763    6592 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0419 17:36:42.047009    6592 main.go:141] libmachine: Decoding PEM data...
	I0419 17:36:42.047009    6592 main.go:141] libmachine: Parsing certificate...
	I0419 17:36:42.047135    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0419 17:36:43.932368    6592 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0419 17:36:43.932368    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:36:43.932561    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0419 17:36:45.687039    6592 main.go:141] libmachine: [stdout =====>] : False
	
	I0419 17:36:45.687039    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:36:45.687039    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 17:36:47.203723    6592 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 17:36:47.211508    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:36:47.211508    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 17:36:50.951966    6592 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 17:36:50.964727    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:36:50.967214    6592 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0419 17:36:51.467126    6592 main.go:141] libmachine: Creating SSH key...
	I0419 17:36:51.639067    6592 main.go:141] libmachine: Creating VM...
	I0419 17:36:51.639499    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 17:36:54.554035    6592 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 17:36:54.565289    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:36:54.565289    6592 main.go:141] libmachine: Using switch "Default Switch"
	I0419 17:36:54.565462    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 17:36:56.328115    6592 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 17:36:56.328115    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:36:56.337140    6592 main.go:141] libmachine: Creating VHD
	I0419 17:36:56.337140    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0419 17:36:59.947234    6592 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : DAD6BA86-FF7C-4654-8EED-887E8261B451
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0419 17:36:59.947234    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:36:59.947234    6592 main.go:141] libmachine: Writing magic tar header
	I0419 17:36:59.947234    6592 main.go:141] libmachine: Writing SSH key tar header
	I0419 17:36:59.955571    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0419 17:37:03.033601    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:03.033697    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:03.033697    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\disk.vhd' -SizeBytes 20000MB
	I0419 17:37:05.489037    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:05.489037    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:05.500648    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-095800-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0419 17:37:09.042127    6592 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-095800-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0419 17:37:09.042127    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:09.054998    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-095800-m03 -DynamicMemoryEnabled $false
	I0419 17:37:11.240904    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:11.240904    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:11.252370    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-095800-m03 -Count 2
	I0419 17:37:13.436540    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:13.436627    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:13.436627    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-095800-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\boot2docker.iso'
	I0419 17:37:15.921177    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:15.921177    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:15.932785    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-095800-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\disk.vhd'
	I0419 17:37:18.552415    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:18.552415    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:18.552415    6592 main.go:141] libmachine: Starting VM...
	I0419 17:37:18.554115    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-095800-m03
	I0419 17:37:21.570087    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:21.570087    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:21.570087    6592 main.go:141] libmachine: Waiting for host to start...
	I0419 17:37:21.582998    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:37:23.769670    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:37:23.769670    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:23.776327    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:37:26.236477    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:26.248019    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:27.251453    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:37:29.397765    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:37:29.409166    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:29.409166    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:37:31.927720    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:31.927720    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:32.935094    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:37:35.056916    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:37:35.056916    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:35.057406    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:37:37.526989    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:37.528182    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:38.542309    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:37:40.664594    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:37:40.664594    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:40.664936    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:37:43.135114    6592 main.go:141] libmachine: [stdout =====>] : 
	I0419 17:37:43.135114    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:44.160003    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:37:46.282915    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:37:46.287876    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:46.287876    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:37:48.812542    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:37:48.812634    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:48.812634    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:37:50.832585    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:37:50.832585    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:50.844748    6592 machine.go:94] provisionDockerMachine start ...
	I0419 17:37:50.844852    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:37:52.948535    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:37:52.948535    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:52.948676    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:37:55.469644    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:37:55.469644    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:55.483878    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:37:55.491497    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.152 22 <nil> <nil>}
	I0419 17:37:55.491497    6592 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 17:37:55.641932    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0419 17:37:55.641932    6592 buildroot.go:166] provisioning hostname "ha-095800-m03"
	I0419 17:37:55.642038    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:37:57.638660    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:37:57.638660    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:37:57.650318    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:00.126300    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:00.141381    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:00.148794    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:38:00.149323    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.152 22 <nil> <nil>}
	I0419 17:38:00.149323    6592 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-095800-m03 && echo "ha-095800-m03" | sudo tee /etc/hostname
	I0419 17:38:00.316085    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-095800-m03
	
	I0419 17:38:00.316203    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:02.358346    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:02.358346    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:02.358616    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:04.852449    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:04.852449    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:04.859428    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:38:04.860096    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.152 22 <nil> <nil>}
	I0419 17:38:04.860096    6592 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-095800-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-095800-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-095800-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 17:38:05.016232    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 17:38:05.016338    6592 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0419 17:38:05.016338    6592 buildroot.go:174] setting up certificates
	I0419 17:38:05.016441    6592 provision.go:84] configureAuth start
	I0419 17:38:05.016441    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:07.058195    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:07.070447    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:07.070447    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:09.545586    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:09.551380    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:09.551380    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:11.633519    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:11.633519    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:11.633519    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:14.185339    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:14.197210    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:14.197210    6592 provision.go:143] copyHostCerts
	I0419 17:38:14.197504    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0419 17:38:14.197957    6592 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0419 17:38:14.198076    6592 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0419 17:38:14.198584    6592 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0419 17:38:14.200260    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0419 17:38:14.200803    6592 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0419 17:38:14.201203    6592 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0419 17:38:14.201713    6592 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0419 17:38:14.203486    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0419 17:38:14.203882    6592 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0419 17:38:14.203882    6592 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0419 17:38:14.204519    6592 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0419 17:38:14.205446    6592 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-095800-m03 san=[127.0.0.1 172.19.47.152 ha-095800-m03 localhost minikube]
	I0419 17:38:14.367604    6592 provision.go:177] copyRemoteCerts
	I0419 17:38:14.389720    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 17:38:14.390006    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:16.418777    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:16.418777    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:16.418777    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:18.926535    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:18.938842    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:18.939168    6592 sshutil.go:53] new ssh client: &{IP:172.19.47.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\id_rsa Username:docker}
	I0419 17:38:19.052396    6592 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6626128s)
	I0419 17:38:19.052507    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0419 17:38:19.053004    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0419 17:38:19.102195    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0419 17:38:19.102770    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0419 17:38:19.152551    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0419 17:38:19.153168    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0419 17:38:19.199380    6592 provision.go:87] duration metric: took 14.1828613s to configureAuth
	I0419 17:38:19.199446    6592 buildroot.go:189] setting minikube options for container-runtime
	I0419 17:38:19.199681    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:38:19.199681    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:21.279616    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:21.279616    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:21.287066    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:23.815497    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:23.815497    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:23.821891    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:38:23.822544    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.152 22 <nil> <nil>}
	I0419 17:38:23.822544    6592 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0419 17:38:23.967507    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0419 17:38:23.967642    6592 buildroot.go:70] root file system type: tmpfs
	I0419 17:38:23.967845    6592 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0419 17:38:23.967845    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:26.030688    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:26.030856    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:26.030944    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:28.496213    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:28.496213    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:28.515064    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:38:28.515064    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.152 22 <nil> <nil>}
	I0419 17:38:28.515064    6592 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.32.218"
	Environment="NO_PROXY=172.19.32.218,172.19.39.106"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0419 17:38:28.685193    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.32.218
	Environment=NO_PROXY=172.19.32.218,172.19.39.106
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0419 17:38:28.685321    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:30.749145    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:30.749145    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:30.749351    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:33.236418    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:33.248856    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:33.256122    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:38:33.256927    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.152 22 <nil> <nil>}
	I0419 17:38:33.256995    6592 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0419 17:38:35.396561    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0419 17:38:35.396561    6592 machine.go:97] duration metric: took 44.5517062s to provisionDockerMachine
	I0419 17:38:35.396561    6592 client.go:171] duration metric: took 1m53.3556453s to LocalClient.Create
	I0419 17:38:35.397119    6592 start.go:167] duration metric: took 1m53.3562038s to libmachine.API.Create "ha-095800"
	I0419 17:38:35.397188    6592 start.go:293] postStartSetup for "ha-095800-m03" (driver="hyperv")
	I0419 17:38:35.397188    6592 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 17:38:35.411546    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 17:38:35.411546    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:37.453290    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:37.453290    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:37.465128    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:39.917137    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:39.917137    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:39.928497    6592 sshutil.go:53] new ssh client: &{IP:172.19.47.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\id_rsa Username:docker}
	I0419 17:38:40.057982    6592 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6464251s)
	I0419 17:38:40.074420    6592 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 17:38:40.082216    6592 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 17:38:40.082216    6592 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0419 17:38:40.082888    6592 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0419 17:38:40.083948    6592 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> 34162.pem in /etc/ssl/certs
	I0419 17:38:40.083948    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /etc/ssl/certs/34162.pem
	I0419 17:38:40.095395    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 17:38:40.116711    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /etc/ssl/certs/34162.pem (1708 bytes)
	I0419 17:38:40.165058    6592 start.go:296] duration metric: took 4.7678579s for postStartSetup
	I0419 17:38:40.168238    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:42.185258    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:42.185474    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:42.185474    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:44.649522    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:44.649522    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:44.649902    6592 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\config.json ...
	I0419 17:38:44.652645    6592 start.go:128] duration metric: took 2m2.6128718s to createHost
	I0419 17:38:44.652743    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:46.664593    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:46.675511    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:46.675511    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:49.156680    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:49.156680    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:49.162909    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:38:49.163518    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.152 22 <nil> <nil>}
	I0419 17:38:49.163563    6592 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 17:38:49.298602    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713573529.295299619
	
	I0419 17:38:49.298602    6592 fix.go:216] guest clock: 1713573529.295299619
	I0419 17:38:49.298602    6592 fix.go:229] Guest: 2024-04-19 17:38:49.295299619 -0700 PDT Remote: 2024-04-19 17:38:44.6526452 -0700 PDT m=+551.019393501 (delta=4.642654419s)
	I0419 17:38:49.298602    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:51.293675    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:51.293794    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:51.293794    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:53.737513    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:53.737513    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:53.754826    6592 main.go:141] libmachine: Using SSH client type: native
	I0419 17:38:53.755448    6592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.152 22 <nil> <nil>}
	I0419 17:38:53.755448    6592 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713573529
	I0419 17:38:53.913899    6592 main.go:141] libmachine: SSH cmd err, output: <nil>: Sat Apr 20 00:38:49 UTC 2024
	
	I0419 17:38:53.914022    6592 fix.go:236] clock set: Sat Apr 20 00:38:49 UTC 2024
	 (err=<nil>)
	I0419 17:38:53.914022    6592 start.go:83] releasing machines lock for "ha-095800-m03", held for 2m11.8742264s
	I0419 17:38:53.914208    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:55.944490    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:38:55.944490    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:55.944490    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:38:58.428942    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:38:58.428942    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:38:58.431539    6592 out.go:177] * Found network options:
	I0419 17:38:58.434241    6592 out.go:177]   - NO_PROXY=172.19.32.218,172.19.39.106
	W0419 17:38:58.434433    6592 proxy.go:119] fail to check proxy env: Error ip not in block
	W0419 17:38:58.434433    6592 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 17:38:58.439069    6592 out.go:177]   - NO_PROXY=172.19.32.218,172.19.39.106
	W0419 17:38:58.444061    6592 proxy.go:119] fail to check proxy env: Error ip not in block
	W0419 17:38:58.444061    6592 proxy.go:119] fail to check proxy env: Error ip not in block
	W0419 17:38:58.445233    6592 proxy.go:119] fail to check proxy env: Error ip not in block
	W0419 17:38:58.445233    6592 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 17:38:58.446928    6592 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 17:38:58.446928    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:38:58.452204    6592 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0419 17:38:58.452204    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:39:00.570988    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:39:00.570988    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:39:00.571108    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:39:00.571108    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:39:00.571108    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:39:00.571108    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:39:03.123020    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:39:03.123147    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:39:03.123379    6592 sshutil.go:53] new ssh client: &{IP:172.19.47.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\id_rsa Username:docker}
	I0419 17:39:03.179727    6592 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:39:03.181234    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:39:03.181478    6592 sshutil.go:53] new ssh client: &{IP:172.19.47.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\id_rsa Username:docker}
	I0419 17:39:03.222672    6592 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7703898s)
	W0419 17:39:03.222742    6592 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 17:39:03.237311    6592 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 17:39:03.347132    6592 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 17:39:03.347132    6592 start.go:494] detecting cgroup driver to use...
	I0419 17:39:03.347132    6592 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.900192s)
	I0419 17:39:03.347132    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 17:39:03.397019    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0419 17:39:03.435054    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0419 17:39:03.457045    6592 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0419 17:39:03.470340    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0419 17:39:03.503878    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 17:39:03.543272    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0419 17:39:03.577110    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 17:39:03.612714    6592 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 17:39:03.650344    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0419 17:39:03.679904    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0419 17:39:03.717614    6592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
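The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place: cgroup driver to cgroupfs, runtime to runc v2, pause image to 3.9. A minimal sketch applying the same substitutions to a sample file (the TOML fragment is illustrative, not a complete containerd config):

```shell
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  runtime_type = "io.containerd.runtime.v1.linux"
EOF
# The same substitutions the log runs over /etc/containerd/config.toml:
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$cfg"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
cat "$cfg"
```

The `\1` backreference preserves each line's original indentation, so nesting inside the TOML tables is untouched.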
	I0419 17:39:03.762410    6592 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 17:39:03.794589    6592 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 17:39:03.827744    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:39:04.022133    6592 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0419 17:39:04.042164    6592 start.go:494] detecting cgroup driver to use...
	I0419 17:39:04.073695    6592 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0419 17:39:04.108778    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 17:39:04.146635    6592 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 17:39:04.196087    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 17:39:04.237093    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 17:39:04.274071    6592 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0419 17:39:04.337576    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 17:39:04.364266    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 17:39:04.416875    6592 ssh_runner.go:195] Run: which cri-dockerd
	I0419 17:39:04.436691    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0419 17:39:04.454448    6592 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0419 17:39:04.497202    6592 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0419 17:39:04.697669    6592 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0419 17:39:04.891377    6592 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0419 17:39:04.891377    6592 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0419 17:39:04.945548    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:39:05.159643    6592 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 17:39:07.689312    6592 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5296124s)
	I0419 17:39:07.703394    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0419 17:39:07.745691    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 17:39:07.783047    6592 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0419 17:39:07.990577    6592 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0419 17:39:08.193850    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:39:08.394421    6592 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0419 17:39:08.438171    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 17:39:08.477581    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:39:08.676485    6592 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0419 17:39:08.785823    6592 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0419 17:39:08.801526    6592 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0419 17:39:08.813074    6592 start.go:562] Will wait 60s for crictl version
	I0419 17:39:08.826354    6592 ssh_runner.go:195] Run: which crictl
	I0419 17:39:08.845207    6592 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 17:39:08.902855    6592 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0419 17:39:08.914068    6592 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 17:39:08.958593    6592 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 17:39:08.992203    6592 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0419 17:39:08.994755    6592 out.go:177]   - env NO_PROXY=172.19.32.218
	I0419 17:39:08.997378    6592 out.go:177]   - env NO_PROXY=172.19.32.218,172.19.39.106
	I0419 17:39:08.998835    6592 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0419 17:39:09.001576    6592 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0419 17:39:09.001576    6592 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0419 17:39:09.001576    6592 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0419 17:39:09.001576    6592 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8c:b9:25 Flags:up|broadcast|multicast|running}
	I0419 17:39:09.006993    6592 ip.go:210] interface addr: fe80::ce04:318e:a1d8:4460/64
	I0419 17:39:09.006993    6592 ip.go:210] interface addr: 172.19.32.1/20
	I0419 17:39:09.018531    6592 ssh_runner.go:195] Run: grep 172.19.32.1	host.minikube.internal$ /etc/hosts
	I0419 17:39:09.025883    6592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.32.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
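The `host.minikube.internal` entry is refreshed with a `grep -v` / re-append idiom so repeated starts never accumulate duplicate lines. The same bash pattern against a temp hosts file (the real command targets `/etc/hosts` and finishes with `sudo cp`):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.19.32.1\thost.minikube.internal\n' > "$hosts"
# Drop any stale entry, then append the current one -- idempotent on re-run.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.19.32.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

The `$'\t…$'` pattern anchors on a literal tab plus end-of-line, so only the exact hostname entry is removed, not substrings of other entries.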
	I0419 17:39:09.053323    6592 mustload.go:65] Loading cluster: ha-095800
	I0419 17:39:09.054137    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:39:09.054235    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:39:11.118523    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:39:11.118639    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:39:11.118639    6592 host.go:66] Checking if "ha-095800" exists ...
	I0419 17:39:11.119458    6592 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800 for IP: 172.19.47.152
	I0419 17:39:11.119458    6592 certs.go:194] generating shared ca certs ...
	I0419 17:39:11.119458    6592 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:39:11.120259    6592 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0419 17:39:11.120259    6592 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0419 17:39:11.120259    6592 certs.go:256] generating profile certs ...
	I0419 17:39:11.121626    6592 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\client.key
	I0419 17:39:11.121853    6592 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.ef4167b0
	I0419 17:39:11.121982    6592 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.ef4167b0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.32.218 172.19.39.106 172.19.47.152 172.19.47.254]
	I0419 17:39:11.213754    6592 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.ef4167b0 ...
	I0419 17:39:11.213754    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.ef4167b0: {Name:mk764ccec1a095eae423822d018e7356d3a6c394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:39:11.216559    6592 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.ef4167b0 ...
	I0419 17:39:11.216559    6592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.ef4167b0: {Name:mkaa0fbf04b32aade596377c008e33461f7877fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 17:39:11.217442    6592 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt.ef4167b0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt
	I0419 17:39:11.224115    6592 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key.ef4167b0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key
	I0419 17:39:11.230756    6592 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key
	I0419 17:39:11.230756    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 17:39:11.230756    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0419 17:39:11.232684    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 17:39:11.232937    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 17:39:11.232937    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 17:39:11.233261    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 17:39:11.233446    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 17:39:11.233446    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 17:39:11.233446    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem (1338 bytes)
	W0419 17:39:11.234755    6592 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416_empty.pem, impossibly tiny 0 bytes
	I0419 17:39:11.234755    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0419 17:39:11.235306    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0419 17:39:11.235621    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0419 17:39:11.235976    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0419 17:39:11.236203    6592 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem (1708 bytes)
	I0419 17:39:11.236690    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /usr/share/ca-certificates/34162.pem
	I0419 17:39:11.236896    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:39:11.237111    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem -> /usr/share/ca-certificates/3416.pem
	I0419 17:39:11.237289    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:39:13.291863    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:39:13.291863    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:39:13.291863    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:39:15.825166    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:39:15.825166    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:39:15.825293    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:39:15.945571    6592 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0419 17:39:15.955555    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0419 17:39:15.996290    6592 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0419 17:39:16.007312    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0419 17:39:16.043327    6592 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0419 17:39:16.053007    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0419 17:39:16.083976    6592 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0419 17:39:16.094316    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0419 17:39:16.133200    6592 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0419 17:39:16.143100    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0419 17:39:16.176803    6592 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0419 17:39:16.185991    6592 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0419 17:39:16.206974    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 17:39:16.255804    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 17:39:16.306403    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 17:39:16.357156    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 17:39:16.403469    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0419 17:39:16.449096    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0419 17:39:16.496831    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 17:39:16.543353    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-095800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0419 17:39:16.597157    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /usr/share/ca-certificates/34162.pem (1708 bytes)
	I0419 17:39:16.643894    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 17:39:16.694387    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem --> /usr/share/ca-certificates/3416.pem (1338 bytes)
	I0419 17:39:16.739429    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0419 17:39:16.771610    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0419 17:39:16.804794    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0419 17:39:16.837960    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0419 17:39:16.875078    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0419 17:39:16.904553    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0419 17:39:16.946580    6592 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0419 17:39:16.992453    6592 ssh_runner.go:195] Run: openssl version
	I0419 17:39:17.018985    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34162.pem && ln -fs /usr/share/ca-certificates/34162.pem /etc/ssl/certs/34162.pem"
	I0419 17:39:17.057785    6592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34162.pem
	I0419 17:39:17.066127    6592 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 17:39:17.077350    6592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34162.pem
	I0419 17:39:17.102518    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34162.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 17:39:17.141032    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 17:39:17.175317    6592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:39:17.183890    6592 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:39:17.197519    6592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 17:39:17.218226    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 17:39:17.258654    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3416.pem && ln -fs /usr/share/ca-certificates/3416.pem /etc/ssl/certs/3416.pem"
	I0419 17:39:17.295567    6592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3416.pem
	I0419 17:39:17.302678    6592 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 17:39:17.314421    6592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3416.pem
	I0419 17:39:17.338964    6592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3416.pem /etc/ssl/certs/51391683.0"
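Each CA file above is symlinked into `/etc/ssl/certs` under its OpenSSL subject-hash name (e.g. `3ec20f2e.0`), which is how OpenSSL locates trust anchors. A sketch of the hash-and-link step using a throwaway self-signed cert in a temp dir (stand-in for the real `3416.pem` under `/usr/share/ca-certificates`):

```shell
certdir=$(mktemp -d)
# Generate a throwaway self-signed cert (stand-in for the real CA file).
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$certdir/k.pem" \
    -out "$certdir/demo.pem" -days 1 -subj "/CN=demo" 2>/dev/null
# Compute the subject hash and link the cert under <hash>.0, as the log does.
hash=$(openssl x509 -hash -noout -in "$certdir/demo.pem")
ln -fs "$certdir/demo.pem" "$certdir/$hash.0"
ls -l "$certdir/$hash.0"
```

The `.0` suffix disambiguates multiple certificates that hash to the same value; OpenSSL tries `.0`, `.1`, and so on during verification.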
	I0419 17:39:17.374864    6592 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 17:39:17.381271    6592 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 17:39:17.381271    6592 kubeadm.go:928] updating node {m03 172.19.47.152 8443 v1.30.0 docker true true} ...
	I0419 17:39:17.381907    6592 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-095800-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.47.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP:172.19.47.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 17:39:17.381978    6592 kube-vip.go:111] generating kube-vip config ...
	I0419 17:39:17.394978    6592 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0419 17:39:17.421717    6592 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0419 17:39:17.421881    6592 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.47.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable

	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0419 17:39:17.434497    6592 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 17:39:17.456304    6592 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0419 17:39:17.467349    6592 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0419 17:39:17.494426    6592 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0419 17:39:17.494426    6592 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0419 17:39:17.494426    6592 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0419 17:39:17.494426    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0419 17:39:17.494972    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0419 17:39:17.510020    6592 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0419 17:39:17.513542    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 17:39:17.513542    6592 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0419 17:39:17.523702    6592 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0419 17:39:17.523702    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0419 17:39:17.567744    6592 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0419 17:39:17.567744    6592 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0419 17:39:17.567744    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0419 17:39:17.594374    6592 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0419 17:39:17.638059    6592 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0419 17:39:17.638290    6592 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
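Each of the kubectl/kubeadm/kubelet transfers above follows the same shape: a `stat` existence check fails with status 1, and only then is the binary copied over. The check-then-copy pattern, sketched locally with plain `cp` standing in for the SSH `scp` (all paths and the file contents are illustrative):

```shell
srcdir=$(mktemp -d); dstdir=$(mktemp -d)
printf 'fake-kubectl' > "$srcdir/kubectl"
# Copy only when the destination stat fails (i.e. the file is absent),
# mirroring the log's "existence check ... Process exited with status 1".
if ! stat -c "%s %y" "$dstdir/kubectl" >/dev/null 2>&1; then
    cp "$srcdir/kubectl" "$dstdir/kubectl"
fi
ls "$dstdir"
```

In the real run the `%s %y` output (size and mtime) would also let the caller skip re-copying an identical binary; here only the absent-file branch is exercised.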
	I0419 17:39:18.819259    6592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0419 17:39:18.901831    6592 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0419 17:39:18.934888    6592 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 17:39:18.971838    6592 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0419 17:39:19.022963    6592 ssh_runner.go:195] Run: grep 172.19.47.254	control-plane.minikube.internal$ /etc/hosts
	I0419 17:39:19.029935    6592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.47.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 17:39:19.066565    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:39:19.277218    6592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 17:39:19.309649    6592 host.go:66] Checking if "ha-095800" exists ...
	I0419 17:39:19.310691    6592 start.go:316] joinCluster: &{Name:ha-095800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-095800 Namespace:default APIServerHAVIP:172.19.47.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.32.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.39.106 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.19.47.152 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 17:39:19.310917    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0419 17:39:19.310976    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:39:21.350045    6592 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:39:21.350045    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:39:21.350045    6592 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:39:23.917285    6592 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:39:23.917285    6592 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:39:23.917680    6592 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:39:24.132257    6592 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8212639s)
	I0419 17:39:24.132314    6592 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.19.47.152 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 17:39:24.132440    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cfxogg.84yr6zh5qlpcbk7r --discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-095800-m03 --control-plane --apiserver-advertise-address=172.19.47.152 --apiserver-bind-port=8443"
	I0419 17:40:09.001560    6592 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cfxogg.84yr6zh5qlpcbk7r --discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-095800-m03 --control-plane --apiserver-advertise-address=172.19.47.152 --apiserver-bind-port=8443": (44.8689706s)
	I0419 17:40:09.001670    6592 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0419 17:40:09.939583    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-095800-m03 minikube.k8s.io/updated_at=2024_04_19T17_40_09_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=ha-095800 minikube.k8s.io/primary=false
	I0419 17:40:10.112579    6592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-095800-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0419 17:40:10.304761    6592 start.go:318] duration metric: took 50.9939474s to joinCluster
	I0419 17:40:10.304761    6592 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.19.47.152 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 17:40:10.307995    6592 out.go:177] * Verifying Kubernetes components...
	I0419 17:40:10.305764    6592 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:40:10.323961    6592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 17:40:10.656939    6592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 17:40:10.695868    6592 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 17:40:10.696716    6592 kapi.go:59] client config for ha-095800: &rest.Config{Host:"https://172.19.47.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-095800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-095800\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c35620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0419 17:40:10.696843    6592 kubeadm.go:477] Overriding stale ClientConfig host https://172.19.47.254:8443 with https://172.19.32.218:8443
	I0419 17:40:10.697071    6592 node_ready.go:35] waiting up to 6m0s for node "ha-095800-m03" to be "Ready" ...
	I0419 17:40:10.697712    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:10.697712    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:10.697769    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:10.697769    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:10.713146    6592 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0419 17:40:11.213892    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:11.213892    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:11.213892    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:11.213892    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:11.220099    6592 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:40:11.698788    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:11.698788    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:11.699205    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:11.699205    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:11.702127    6592 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:40:12.208561    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:12.208632    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:12.208632    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:12.208632    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:12.213630    6592 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:40:12.710875    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:12.710918    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:12.710956    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:12.710956    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:12.714370    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:40:12.717006    6592 node_ready.go:53] node "ha-095800-m03" has status "Ready":"False"
	I0419 17:40:13.205233    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:13.205368    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:13.205368    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:13.205368    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:13.211711    6592 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:40:13.711288    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:13.711345    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:13.711345    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:13.711345    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:13.711715    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:14.212761    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:14.212831    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:14.212882    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:14.212882    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:14.218576    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:40:14.713172    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:14.713345    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:14.713345    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:14.713345    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:14.715843    6592 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:40:14.718765    6592 node_ready.go:53] node "ha-095800-m03" has status "Ready":"False"
	I0419 17:40:15.212460    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:15.212673    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:15.212673    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:15.212673    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:15.216665    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:40:15.702162    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:15.702162    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:15.702162    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:15.702162    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:15.702856    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:16.208060    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:16.208060    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:16.208060    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:16.208060    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:16.211086    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:40:16.702963    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:16.703060    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:16.703060    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:16.703060    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:17.097940    6592 round_trippers.go:574] Response Status: 200 OK in 394 milliseconds
	I0419 17:40:17.098803    6592 node_ready.go:53] node "ha-095800-m03" has status "Ready":"False"
	I0419 17:40:17.216857    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:17.216939    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:17.216939    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:17.216988    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:17.225131    6592 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 17:40:17.704708    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:17.704708    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:17.704708    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:17.704865    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:17.705283    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:18.202942    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:18.203165    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:18.203165    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:18.203165    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:18.208674    6592 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:40:18.702647    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:18.702647    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:18.702729    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:18.702729    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:18.707053    6592 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:40:19.211521    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:19.211822    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.211822    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.211822    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.216754    6592 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:40:19.217679    6592 node_ready.go:49] node "ha-095800-m03" has status "Ready":"True"
	I0419 17:40:19.217679    6592 node_ready.go:38] duration metric: took 8.5205877s for node "ha-095800-m03" to be "Ready" ...
	I0419 17:40:19.217762    6592 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 17:40:19.217851    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:40:19.217851    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.217851    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.217851    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.227736    6592 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0419 17:40:19.240257    6592 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7mk28" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.240475    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7mk28
	I0419 17:40:19.240504    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.240504    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.240504    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.241090    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:19.246642    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:19.246701    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.246749    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.246749    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.250264    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:40:19.251590    6592 pod_ready.go:92] pod "coredns-7db6d8ff4d-7mk28" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:19.251590    6592 pod_ready.go:81] duration metric: took 11.242ms for pod "coredns-7db6d8ff4d-7mk28" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.251590    6592 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vklb9" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.251590    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vklb9
	I0419 17:40:19.251590    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.251590    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.251590    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.252978    6592 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:40:19.256787    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:19.256787    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.256787    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.256787    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.260956    6592 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:40:19.261078    6592 pod_ready.go:92] pod "coredns-7db6d8ff4d-vklb9" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:19.262089    6592 pod_ready.go:81] duration metric: took 10.4983ms for pod "coredns-7db6d8ff4d-vklb9" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.262089    6592 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.262089    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-095800
	I0419 17:40:19.262089    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.262089    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.262089    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.269434    6592 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 17:40:19.270652    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:19.270652    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.270652    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.270652    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.275098    6592 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:40:19.275322    6592 pod_ready.go:92] pod "etcd-ha-095800" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:19.275909    6592 pod_ready.go:81] duration metric: took 13.8208ms for pod "etcd-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.275909    6592 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.275909    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-095800-m02
	I0419 17:40:19.275909    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.275909    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.275909    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.280885    6592 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:40:19.281748    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:40:19.281937    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.281937    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.281937    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.285680    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:40:19.287587    6592 pod_ready.go:92] pod "etcd-ha-095800-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:19.287620    6592 pod_ready.go:81] duration metric: took 11.7103ms for pod "etcd-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.287620    6592 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-095800-m03" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.413731    6592 request.go:629] Waited for 125.937ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-095800-m03
	I0419 17:40:19.413967    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-095800-m03
	I0419 17:40:19.414042    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.414068    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.414068    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.417711    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:40:19.619930    6592 request.go:629] Waited for 198.0891ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:19.620022    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:19.620022    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.620022    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.620022    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.621830    6592 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 17:40:19.621830    6592 pod_ready.go:92] pod "etcd-ha-095800-m03" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:19.621830    6592 pod_ready.go:81] duration metric: took 334.2095ms for pod "etcd-ha-095800-m03" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.621830    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:19.820222    6592 request.go:629] Waited for 198.3916ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800
	I0419 17:40:19.820367    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800
	I0419 17:40:19.820367    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:19.820367    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:19.820367    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:19.821048    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:20.015370    6592 request.go:629] Waited for 187.9931ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:20.015658    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:20.015658    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:20.015695    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:20.015695    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:20.016065    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:20.021653    6592 pod_ready.go:92] pod "kube-apiserver-ha-095800" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:20.021653    6592 pod_ready.go:81] duration metric: took 399.8221ms for pod "kube-apiserver-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:20.021734    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:20.238826    6592 request.go:629] Waited for 216.8856ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m02
	I0419 17:40:20.238826    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m02
	I0419 17:40:20.238826    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:20.238826    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:20.238826    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:20.246211    6592 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 17:40:20.425566    6592 request.go:629] Waited for 177.8182ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:40:20.425747    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:40:20.425747    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:20.425747    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:20.425747    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:20.426530    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:20.432172    6592 pod_ready.go:92] pod "kube-apiserver-ha-095800-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:20.432236    6592 pod_ready.go:81] duration metric: took 410.501ms for pod "kube-apiserver-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:20.432312    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-095800-m03" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:20.618720    6592 request.go:629] Waited for 186.0776ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m03
	I0419 17:40:20.619042    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m03
	I0419 17:40:20.619137    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:20.619137    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:20.619137    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:20.619888    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:20.812375    6592 request.go:629] Waited for 186.6694ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:20.812555    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:20.812555    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:20.812676    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:20.812676    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:20.813332    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:21.025836    6592 request.go:629] Waited for 69.949ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m03
	I0419 17:40:21.026187    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m03
	I0419 17:40:21.026187    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:21.026187    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:21.026187    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:21.032496    6592 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:40:21.226179    6592 request.go:629] Waited for 192.7513ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:21.226179    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:21.226179    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:21.226179    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:21.226179    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:21.226720    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:21.448697    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m03
	I0419 17:40:21.448697    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:21.448697    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:21.448697    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:21.449253    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:21.629236    6592 request.go:629] Waited for 170.9525ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:21.629380    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:21.629423    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:21.629463    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:21.629463    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:21.635409    6592 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 17:40:21.944000    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095800-m03
	I0419 17:40:21.944090    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:21.944090    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:21.944090    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:21.944331    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:22.019189    6592 request.go:629] Waited for 74.7797ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:22.019284    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:22.019284    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:22.019284    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:22.019284    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:22.019555    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:22.025096    6592 pod_ready.go:92] pod "kube-apiserver-ha-095800-m03" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:22.025096    6592 pod_ready.go:81] duration metric: took 1.5927794s for pod "kube-apiserver-ha-095800-m03" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:22.025096    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:22.218615    6592 request.go:629] Waited for 193.1446ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800
	I0419 17:40:22.218877    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800
	I0419 17:40:22.218910    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:22.218910    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:22.218962    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:22.219273    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:22.414366    6592 request.go:629] Waited for 189.6078ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:22.414688    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:22.414763    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:22.414763    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:22.414763    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:22.415521    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:22.422357    6592 pod_ready.go:92] pod "kube-controller-manager-ha-095800" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:22.422357    6592 pod_ready.go:81] duration metric: took 397.2607ms for pod "kube-controller-manager-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:22.422490    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:22.620926    6592 request.go:629] Waited for 198.0742ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800-m02
	I0419 17:40:22.621022    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800-m02
	I0419 17:40:22.621022    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:22.621152    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:22.621152    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:22.621996    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:22.822253    6592 request.go:629] Waited for 192.6322ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:40:22.822253    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:40:22.822253    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:22.822253    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:22.822547    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:22.823190    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:22.828672    6592 pod_ready.go:92] pod "kube-controller-manager-ha-095800-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:22.828750    6592 pod_ready.go:81] duration metric: took 406.2586ms for pod "kube-controller-manager-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:22.828750    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-095800-m03" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:23.026425    6592 request.go:629] Waited for 197.4078ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800-m03
	I0419 17:40:23.026516    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095800-m03
	I0419 17:40:23.026516    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:23.026516    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:23.026516    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:23.026915    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:23.223351    6592 request.go:629] Waited for 189.653ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:23.223519    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:23.223642    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:23.223681    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:23.223723    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:23.227877    6592 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 17:40:23.229030    6592 pod_ready.go:92] pod "kube-controller-manager-ha-095800-m03" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:23.229030    6592 pod_ready.go:81] duration metric: took 400.2798ms for pod "kube-controller-manager-ha-095800-m03" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:23.229206    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4nldk" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:23.425877    6592 request.go:629] Waited for 196.5988ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4nldk
	I0419 17:40:23.426148    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4nldk
	I0419 17:40:23.426209    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:23.426209    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:23.426209    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:23.437207    6592 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0419 17:40:23.616332    6592 request.go:629] Waited for 178.3307ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:40:23.616649    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:40:23.616710    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:23.616772    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:23.616772    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:23.619753    6592 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:40:23.623566    6592 pod_ready.go:92] pod "kube-proxy-4nldk" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:23.623566    6592 pod_ready.go:81] duration metric: took 394.3594ms for pod "kube-proxy-4nldk" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:23.623566    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5dp8h" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:23.821177    6592 request.go:629] Waited for 196.706ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5dp8h
	I0419 17:40:23.821206    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5dp8h
	I0419 17:40:23.821206    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:23.821206    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:23.821206    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:23.826556    6592 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:40:24.020696    6592 request.go:629] Waited for 193.0611ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:24.020883    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:24.020883    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:24.020883    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:24.020883    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:24.034179    6592 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0419 17:40:24.034462    6592 pod_ready.go:92] pod "kube-proxy-5dp8h" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:24.034462    6592 pod_ready.go:81] duration metric: took 410.8949ms for pod "kube-proxy-5dp8h" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:24.035004    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vq826" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:24.227241    6592 request.go:629] Waited for 192.1818ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vq826
	I0419 17:40:24.227241    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vq826
	I0419 17:40:24.227241    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:24.227241    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:24.227241    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:24.239234    6592 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0419 17:40:24.424820    6592 request.go:629] Waited for 184.277ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:24.425093    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:24.425157    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:24.425157    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:24.425157    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:24.433275    6592 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 17:40:24.434400    6592 pod_ready.go:92] pod "kube-proxy-vq826" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:24.434400    6592 pod_ready.go:81] duration metric: took 399.3954ms for pod "kube-proxy-vq826" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:24.434400    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:24.627095    6592 request.go:629] Waited for 192.6938ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800
	I0419 17:40:24.627095    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800
	I0419 17:40:24.627095    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:24.627095    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:24.627095    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:24.627552    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:24.827219    6592 request.go:629] Waited for 195.1688ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:24.827296    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800
	I0419 17:40:24.827406    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:24.827406    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:24.827406    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:24.828035    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:24.834608    6592 pod_ready.go:92] pod "kube-scheduler-ha-095800" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:24.835160    6592 pod_ready.go:81] duration metric: took 400.7587ms for pod "kube-scheduler-ha-095800" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:24.835160    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:25.015561    6592 request.go:629] Waited for 180.2485ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800-m02
	I0419 17:40:25.015793    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800-m02
	I0419 17:40:25.015847    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:25.015894    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:25.015894    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:25.021165    6592 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 17:40:25.216966    6592 request.go:629] Waited for 194.3002ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:40:25.217221    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m02
	I0419 17:40:25.217284    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:25.217323    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:25.217338    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:25.218007    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:25.222484    6592 pod_ready.go:92] pod "kube-scheduler-ha-095800-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:25.222484    6592 pod_ready.go:81] duration metric: took 387.3235ms for pod "kube-scheduler-ha-095800-m02" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:25.222484    6592 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-095800-m03" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:25.426512    6592 request.go:629] Waited for 203.4197ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800-m03
	I0419 17:40:25.426512    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095800-m03
	I0419 17:40:25.426512    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:25.426512    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:25.426512    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:25.427029    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:25.616024    6592 request.go:629] Waited for 182.6037ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:25.616287    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes/ha-095800-m03
	I0419 17:40:25.616287    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:25.616323    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:25.616323    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:25.622967    6592 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 17:40:25.623672    6592 pod_ready.go:92] pod "kube-scheduler-ha-095800-m03" in "kube-system" namespace has status "Ready":"True"
	I0419 17:40:25.623672    6592 pod_ready.go:81] duration metric: took 401.1865ms for pod "kube-scheduler-ha-095800-m03" in "kube-system" namespace to be "Ready" ...
	I0419 17:40:25.624207    6592 pod_ready.go:38] duration metric: took 6.4064296s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 17:40:25.624207    6592 api_server.go:52] waiting for apiserver process to appear ...
	I0419 17:40:25.638890    6592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 17:40:25.667271    6592 api_server.go:72] duration metric: took 15.3624731s to wait for apiserver process to appear ...
	I0419 17:40:25.667308    6592 api_server.go:88] waiting for apiserver healthz status ...
	I0419 17:40:25.667308    6592 api_server.go:253] Checking apiserver healthz at https://172.19.32.218:8443/healthz ...
	I0419 17:40:25.675063    6592 api_server.go:279] https://172.19.32.218:8443/healthz returned 200:
	ok
	I0419 17:40:25.676474    6592 round_trippers.go:463] GET https://172.19.32.218:8443/version
	I0419 17:40:25.676544    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:25.676544    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:25.676544    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:25.676795    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:25.676795    6592 api_server.go:141] control plane version: v1.30.0
	I0419 17:40:25.676795    6592 api_server.go:131] duration metric: took 9.4866ms to wait for apiserver health ...
	I0419 17:40:25.676795    6592 system_pods.go:43] waiting for kube-system pods to appear ...
	I0419 17:40:25.819508    6592 request.go:629] Waited for 142.7128ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:40:25.819979    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:40:25.819979    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:25.819979    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:25.820089    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:25.831669    6592 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0419 17:40:25.842496    6592 system_pods.go:59] 24 kube-system pods found
	I0419 17:40:25.842496    6592 system_pods.go:61] "coredns-7db6d8ff4d-7mk28" [e9d98fbb-21cc-4618-9709-0b27986c63b1] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "coredns-7db6d8ff4d-vklb9" [a1f46798-9bf9-4abe-9d6d-573902a0d373] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "etcd-ha-095800" [1aaf32fa-58bb-40f3-a162-21259eb4f376] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "etcd-ha-095800-m02" [5b0fc0be-2f86-4758-b8eb-aeb31245afd7] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "etcd-ha-095800-m03" [8532b3ac-29de-4ca5-bfc9-68af08e21e6c] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kindnet-76q26" [a98d461e-7b24-43a6-b11b-4875d803e532] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kindnet-7j4cr" [92ce62b8-71b2-4deb-b295-cf938509a4e5] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kindnet-kpn69" [49ffd8bc-d455-4f64-9822-e2d363df7cc7] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kube-apiserver-ha-095800" [ebaad661-6759-415e-b65f-14d6ffb46853] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kube-apiserver-ha-095800-m02" [99267604-9885-472a-aab9-eda6b150457d] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kube-apiserver-ha-095800-m03" [4085bd90-5449-4c48-9d26-f2ff9c364b8b] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kube-controller-manager-ha-095800" [dc9b9d64-b78b-44e3-a7f6-26ba6007b6dc] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kube-controller-manager-ha-095800-m02" [534ea924-2ff9-48ec-a02c-ce23e4c47324] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kube-controller-manager-ha-095800-m03" [f94ddaec-87d7-41f1-88f5-ec9ef37eb9a5] Running
	I0419 17:40:25.842496    6592 system_pods.go:61] "kube-proxy-4nldk" [79c714ec-b6ec-4cff-86fb-f560bed67202] Running
	I0419 17:40:25.843031    6592 system_pods.go:61] "kube-proxy-5dp8h" [4a95a0be-301a-482f-a714-3f918af5832c] Running
	I0419 17:40:25.843031    6592 system_pods.go:61] "kube-proxy-vq826" [d2b22474-6974-4cbd-8565-95facc3c817e] Running
	I0419 17:40:25.843031    6592 system_pods.go:61] "kube-scheduler-ha-095800" [af0f5d53-c6ab-4235-b9a2-ce0a371ff55f] Running
	I0419 17:40:25.843031    6592 system_pods.go:61] "kube-scheduler-ha-095800-m02" [000d5f12-1c3f-41ba-b0dd-696da8c6b8ad] Running
	I0419 17:40:25.843031    6592 system_pods.go:61] "kube-scheduler-ha-095800-m03" [c9432782-9134-4e45-b8c4-8585290ca2fc] Running
	I0419 17:40:25.843031    6592 system_pods.go:61] "kube-vip-ha-095800" [2fe74317-1ff4-4147-ae17-f2f31f4f06ba] Running
	I0419 17:40:25.843031    6592 system_pods.go:61] "kube-vip-ha-095800-m02" [e80eec5a-c346-4f90-a843-b6ed2d111f0b] Running
	I0419 17:40:25.843031    6592 system_pods.go:61] "kube-vip-ha-095800-m03" [5da00673-3a8b-41ac-8b5a-ec217012aeee] Running
	I0419 17:40:25.843031    6592 system_pods.go:61] "storage-provisioner" [f58269e6-1ef1-442a-972b-cc05662b174c] Running
	I0419 17:40:25.843031    6592 system_pods.go:74] duration metric: took 166.2358ms to wait for pod list to return data ...
	I0419 17:40:25.843031    6592 default_sa.go:34] waiting for default service account to be created ...
	I0419 17:40:26.019554    6592 request.go:629] Waited for 176.0728ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/default/serviceaccounts
	I0419 17:40:26.019554    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/default/serviceaccounts
	I0419 17:40:26.019554    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:26.019554    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:26.019554    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:26.020328    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:26.024992    6592 default_sa.go:45] found service account: "default"
	I0419 17:40:26.025061    6592 default_sa.go:55] duration metric: took 182.0303ms for default service account to be created ...
	I0419 17:40:26.025061    6592 system_pods.go:116] waiting for k8s-apps to be running ...
	I0419 17:40:26.214983    6592 request.go:629] Waited for 189.3667ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:40:26.215151    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/namespaces/kube-system/pods
	I0419 17:40:26.215185    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:26.215185    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:26.215185    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:26.217978    6592 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 17:40:26.236655    6592 system_pods.go:86] 24 kube-system pods found
	I0419 17:40:26.236718    6592 system_pods.go:89] "coredns-7db6d8ff4d-7mk28" [e9d98fbb-21cc-4618-9709-0b27986c63b1] Running
	I0419 17:40:26.236718    6592 system_pods.go:89] "coredns-7db6d8ff4d-vklb9" [a1f46798-9bf9-4abe-9d6d-573902a0d373] Running
	I0419 17:40:26.236718    6592 system_pods.go:89] "etcd-ha-095800" [1aaf32fa-58bb-40f3-a162-21259eb4f376] Running
	I0419 17:40:26.236783    6592 system_pods.go:89] "etcd-ha-095800-m02" [5b0fc0be-2f86-4758-b8eb-aeb31245afd7] Running
	I0419 17:40:26.236783    6592 system_pods.go:89] "etcd-ha-095800-m03" [8532b3ac-29de-4ca5-bfc9-68af08e21e6c] Running
	I0419 17:40:26.236783    6592 system_pods.go:89] "kindnet-76q26" [a98d461e-7b24-43a6-b11b-4875d803e532] Running
	I0419 17:40:26.236835    6592 system_pods.go:89] "kindnet-7j4cr" [92ce62b8-71b2-4deb-b295-cf938509a4e5] Running
	I0419 17:40:26.236853    6592 system_pods.go:89] "kindnet-kpn69" [49ffd8bc-d455-4f64-9822-e2d363df7cc7] Running
	I0419 17:40:26.236853    6592 system_pods.go:89] "kube-apiserver-ha-095800" [ebaad661-6759-415e-b65f-14d6ffb46853] Running
	I0419 17:40:26.236853    6592 system_pods.go:89] "kube-apiserver-ha-095800-m02" [99267604-9885-472a-aab9-eda6b150457d] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-apiserver-ha-095800-m03" [4085bd90-5449-4c48-9d26-f2ff9c364b8b] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-controller-manager-ha-095800" [dc9b9d64-b78b-44e3-a7f6-26ba6007b6dc] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-controller-manager-ha-095800-m02" [534ea924-2ff9-48ec-a02c-ce23e4c47324] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-controller-manager-ha-095800-m03" [f94ddaec-87d7-41f1-88f5-ec9ef37eb9a5] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-proxy-4nldk" [79c714ec-b6ec-4cff-86fb-f560bed67202] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-proxy-5dp8h" [4a95a0be-301a-482f-a714-3f918af5832c] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-proxy-vq826" [d2b22474-6974-4cbd-8565-95facc3c817e] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-scheduler-ha-095800" [af0f5d53-c6ab-4235-b9a2-ce0a371ff55f] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-scheduler-ha-095800-m02" [000d5f12-1c3f-41ba-b0dd-696da8c6b8ad] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-scheduler-ha-095800-m03" [c9432782-9134-4e45-b8c4-8585290ca2fc] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-vip-ha-095800" [2fe74317-1ff4-4147-ae17-f2f31f4f06ba] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-vip-ha-095800-m02" [e80eec5a-c346-4f90-a843-b6ed2d111f0b] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "kube-vip-ha-095800-m03" [5da00673-3a8b-41ac-8b5a-ec217012aeee] Running
	I0419 17:40:26.236907    6592 system_pods.go:89] "storage-provisioner" [f58269e6-1ef1-442a-972b-cc05662b174c] Running
	I0419 17:40:26.236907    6592 system_pods.go:126] duration metric: took 211.8447ms to wait for k8s-apps to be running ...
	I0419 17:40:26.236907    6592 system_svc.go:44] waiting for kubelet service to be running ....
	I0419 17:40:26.246212    6592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 17:40:26.287178    6592 system_svc.go:56] duration metric: took 50.2154ms WaitForService to wait for kubelet
	I0419 17:40:26.287240    6592 kubeadm.go:576] duration metric: took 15.9824407s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 17:40:26.287313    6592 node_conditions.go:102] verifying NodePressure condition ...
	I0419 17:40:26.423817    6592 request.go:629] Waited for 136.3699ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.32.218:8443/api/v1/nodes
	I0419 17:40:26.423817    6592 round_trippers.go:463] GET https://172.19.32.218:8443/api/v1/nodes
	I0419 17:40:26.423817    6592 round_trippers.go:469] Request Headers:
	I0419 17:40:26.423817    6592 round_trippers.go:473]     Accept: application/json, */*
	I0419 17:40:26.423817    6592 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 17:40:26.424469    6592 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 17:40:26.431194    6592 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 17:40:26.431194    6592 node_conditions.go:123] node cpu capacity is 2
	I0419 17:40:26.431194    6592 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 17:40:26.431194    6592 node_conditions.go:123] node cpu capacity is 2
	I0419 17:40:26.431194    6592 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 17:40:26.431194    6592 node_conditions.go:123] node cpu capacity is 2
	I0419 17:40:26.431194    6592 node_conditions.go:105] duration metric: took 143.8802ms to run NodePressure ...
	I0419 17:40:26.431194    6592 start.go:240] waiting for startup goroutines ...
	I0419 17:40:26.431798    6592 start.go:254] writing updated cluster config ...
	I0419 17:40:26.445515    6592 ssh_runner.go:195] Run: rm -f paused
	I0419 17:40:26.594780    6592 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0419 17:40:26.598128    6592 out.go:177] * Done! kubectl is now configured to use "ha-095800" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.902488807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.902508307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.902684412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.953038911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.953120813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.953140214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:32:58 ha-095800 dockerd[1325]: time="2024-04-20T00:32:58.953366220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:41:03 ha-095800 dockerd[1325]: time="2024-04-20T00:41:03.394072984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 20 00:41:03 ha-095800 dockerd[1325]: time="2024-04-20T00:41:03.394248080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 00:41:03 ha-095800 dockerd[1325]: time="2024-04-20T00:41:03.394266280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:41:03 ha-095800 dockerd[1325]: time="2024-04-20T00:41:03.395362555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:41:03 ha-095800 cri-dockerd[1229]: time="2024-04-20T00:41:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/534cd974048a518352c11c7b4010b28e8e1f400ad1f4f9b6c123ccf10f57bcdb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 20 00:41:04 ha-095800 cri-dockerd[1229]: time="2024-04-20T00:41:04Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 20 00:41:05 ha-095800 dockerd[1325]: time="2024-04-20T00:41:05.014542285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 20 00:41:05 ha-095800 dockerd[1325]: time="2024-04-20T00:41:05.014826081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 00:41:05 ha-095800 dockerd[1325]: time="2024-04-20T00:41:05.014944979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:41:05 ha-095800 dockerd[1325]: time="2024-04-20T00:41:05.015313174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 00:42:06 ha-095800 dockerd[1319]: 2024/04/20 00:42:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 00:42:06 ha-095800 dockerd[1319]: 2024/04/20 00:42:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 00:42:06 ha-095800 dockerd[1319]: 2024/04/20 00:42:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 00:42:06 ha-095800 dockerd[1319]: 2024/04/20 00:42:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 00:42:06 ha-095800 dockerd[1319]: 2024/04/20 00:42:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 00:42:06 ha-095800 dockerd[1319]: 2024/04/20 00:42:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 00:42:06 ha-095800 dockerd[1319]: 2024/04/20 00:42:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 00:42:07 ha-095800 dockerd[1319]: 2024/04/20 00:42:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2e2ed01949e55       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago      Running             busybox                   0                   534cd974048a5       busybox-fc5497c4f-l275w
	c1612d89b19bd       cbb01a7bd410d                                                                                         27 minutes ago      Running             coredns                   0                   40281e245fac9       coredns-7db6d8ff4d-7mk28
	37bb284139899       cbb01a7bd410d                                                                                         27 minutes ago      Running             coredns                   0                   457723d9f67a4       coredns-7db6d8ff4d-vklb9
	4ddb9435774ce       6e38f40d628db                                                                                         27 minutes ago      Running             storage-provisioner       0                   47bf1e62b695a       storage-provisioner
	abcfe6bf3c3f8       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              27 minutes ago      Running             kindnet-cni               0                   aae48a51c7222       kindnet-kpn69
	b7a65c81f5f41       a0bf559e280cf                                                                                         27 minutes ago      Running             kube-proxy                0                   9271277bf64ed       kube-proxy-vq826
	6aa83e6a42148       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     28 minutes ago      Running             kube-vip                  0                   dd653687d8d91       kube-vip-ha-095800
	fd73a674b215d       c7aad43836fa5                                                                                         28 minutes ago      Running             kube-controller-manager   0                   8aeedfc48a54a       kube-controller-manager-ha-095800
	10fc813931a16       3861cfcd7c04c                                                                                         28 minutes ago      Running             etcd                      0                   70e54776183a8       etcd-ha-095800
	5b3201e921978       259c8277fcbbc                                                                                         28 minutes ago      Running             kube-scheduler            0                   c1ca7767dd253       kube-scheduler-ha-095800
	9ddfae1ff47d9       c42f13656d0b2                                                                                         28 minutes ago      Running             kube-apiserver            0                   33a33a7a208eb       kube-apiserver-ha-095800
	
	
	==> coredns [37bb28413989] <==
	[INFO] 10.244.0.4:49148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000240397s
	[INFO] 10.244.0.4:34643 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000105598s
	[INFO] 10.244.0.4:36915 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000274996s
	[INFO] 10.244.0.4:41152 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103699s
	[INFO] 10.244.0.4:59052 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000262697s
	[INFO] 10.244.0.4:56196 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073899s
	[INFO] 10.244.2.2:53328 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139098s
	[INFO] 10.244.2.2:38072 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000062099s
	[INFO] 10.244.2.2:58488 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000088899s
	[INFO] 10.244.2.2:55087 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000068099s
	[INFO] 10.244.1.2:45805 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000268096s
	[INFO] 10.244.1.2:55492 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078199s
	[INFO] 10.244.0.4:58168 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000351595s
	[INFO] 10.244.0.4:41098 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060199s
	[INFO] 10.244.2.2:51023 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115198s
	[INFO] 10.244.2.2:49126 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068699s
	[INFO] 10.244.1.2:43231 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207897s
	[INFO] 10.244.1.2:44051 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103999s
	[INFO] 10.244.0.4:38322 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150597s
	[INFO] 10.244.0.4:35307 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149798s
	[INFO] 10.244.0.4:47169 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080399s
	[INFO] 10.244.2.2:39439 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151698s
	[INFO] 10.244.2.2:39046 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154498s
	[INFO] 10.244.2.2:55199 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060499s
	[INFO] 10.244.2.2:47027 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000122398s
	
	
	==> coredns [c1612d89b19b] <==
	[INFO] 10.244.1.2:37673 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.22739666s
	[INFO] 10.244.1.2:57934 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.019259825s
	[INFO] 10.244.1.2:47705 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.113470124s
	[INFO] 10.244.0.4:43777 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144398s
	[INFO] 10.244.0.4:34954 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000177197s
	[INFO] 10.244.0.4:37850 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000124799s
	[INFO] 10.244.2.2:44073 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000105899s
	[INFO] 10.244.1.2:41954 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000260396s
	[INFO] 10.244.1.2:33550 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195998s
	[INFO] 10.244.1.2:54754 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134398s
	[INFO] 10.244.0.4:55985 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000198097s
	[INFO] 10.244.0.4:41839 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012621719s
	[INFO] 10.244.2.2:38470 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000231397s
	[INFO] 10.244.2.2:53036 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000175798s
	[INFO] 10.244.2.2:59372 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064699s
	[INFO] 10.244.2.2:40909 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163398s
	[INFO] 10.244.1.2:42257 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105599s
	[INFO] 10.244.1.2:57777 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086398s
	[INFO] 10.244.0.4:37976 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000166798s
	[INFO] 10.244.0.4:44012 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151998s
	[INFO] 10.244.2.2:35745 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076099s
	[INFO] 10.244.2.2:52538 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073999s
	[INFO] 10.244.1.2:42825 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179197s
	[INFO] 10.244.1.2:45424 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000136098s
	[INFO] 10.244.0.4:34015 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000138898s
	
	
	==> describe nodes <==
	Name:               ha-095800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-095800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-095800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_19T17_32_35_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:32:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-095800
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 01:00:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:56:53 +0000   Sat, 20 Apr 2024 00:32:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:56:53 +0000   Sat, 20 Apr 2024 00:32:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:56:53 +0000   Sat, 20 Apr 2024 00:32:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:56:53 +0000   Sat, 20 Apr 2024 00:32:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.32.218
	  Hostname:    ha-095800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b35e1ffdd6ce4e3ea019e383acec8f36
	  System UUID:                151afd6c-ea6d-2a4e-971e-0fd2cbdb7589
	  Boot ID:                    e2e9e6fa-ec8c-4a9a-8bee-e4bf0e45825d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-l275w              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-7db6d8ff4d-7mk28             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7db6d8ff4d-vklb9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-095800                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-kpn69                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-095800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-095800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-vq826                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-095800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-095800                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27m   kube-proxy       
	  Normal  Starting                 28m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m   kubelet          Node ha-095800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m   kubelet          Node ha-095800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m   kubelet          Node ha-095800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27m   node-controller  Node ha-095800 event: Registered Node ha-095800 in Controller
	  Normal  NodeReady                27m   kubelet          Node ha-095800 status is now: NodeReady
	  Normal  RegisteredNode           24m   node-controller  Node ha-095800 event: Registered Node ha-095800 in Controller
	  Normal  RegisteredNode           20m   node-controller  Node ha-095800 event: Registered Node ha-095800 in Controller
	
	
	Name:               ha-095800-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-095800-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-095800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T17_36_26_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:36:19 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-095800-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:57:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 20 Apr 2024 00:56:45 +0000   Sat, 20 Apr 2024 00:57:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 20 Apr 2024 00:56:45 +0000   Sat, 20 Apr 2024 00:57:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 20 Apr 2024 00:56:45 +0000   Sat, 20 Apr 2024 00:57:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 20 Apr 2024 00:56:45 +0000   Sat, 20 Apr 2024 00:57:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.19.39.106
	  Hostname:    ha-095800-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 f647bbaeeda1463daba8367e17d89c0f
	  System UUID:                11ceb28a-344d-0d49-b8d6-41acde2b853d
	  Boot ID:                    0defed78-62f0-48e9-97c3-1c117ea2506d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dxkjp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-ha-095800-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kindnet-7j4cr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-ha-095800-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-095800-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-4nldk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-095800-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-095800-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24m                kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node ha-095800-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node ha-095800-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node ha-095800-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m                node-controller  Node ha-095800-m02 event: Registered Node ha-095800-m02 in Controller
	  Normal  RegisteredNode           24m                node-controller  Node ha-095800-m02 event: Registered Node ha-095800-m02 in Controller
	  Normal  RegisteredNode           20m                node-controller  Node ha-095800-m02 event: Registered Node ha-095800-m02 in Controller
	  Normal  NodeNotReady             2m44s              node-controller  Node ha-095800-m02 status is now: NodeNotReady
	
	
	Name:               ha-095800-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-095800-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-095800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T17_40_09_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:40:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-095800-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 01:00:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:56:53 +0000   Sat, 20 Apr 2024 00:40:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:56:53 +0000   Sat, 20 Apr 2024 00:40:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:56:53 +0000   Sat, 20 Apr 2024 00:40:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:56:53 +0000   Sat, 20 Apr 2024 00:40:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.47.152
	  Hostname:    ha-095800-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 203d257fe0074ce3b8accd939db5e46a
	  System UUID:                064d6d88-fb2e-6249-b24d-461c3c2fcda0
	  Boot ID:                    a2da1b8d-69bd-4cc7-a1b7-5a0e9e9588ec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tmxkg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-ha-095800-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kindnet-76q26                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-ha-095800-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-095800-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-5dp8h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-095800-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-095800-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node ha-095800-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node ha-095800-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node ha-095800-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node ha-095800-m03 event: Registered Node ha-095800-m03 in Controller
	  Normal  RegisteredNode           20m                node-controller  Node ha-095800-m03 event: Registered Node ha-095800-m03 in Controller
	  Normal  RegisteredNode           20m                node-controller  Node ha-095800-m03 event: Registered Node ha-095800-m03 in Controller
	
	
	Name:               ha-095800-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-095800-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-095800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T17_45_14_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:45:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-095800-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 01:00:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:55:57 +0000   Sat, 20 Apr 2024 00:45:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:55:57 +0000   Sat, 20 Apr 2024 00:45:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:55:57 +0000   Sat, 20 Apr 2024 00:45:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:55:57 +0000   Sat, 20 Apr 2024 00:45:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.41.16
	  Hostname:    ha-095800-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 2168643b199a4045814bf8bc0782b5a2
	  System UUID:                4fb0593d-8655-f54f-b711-c005e5ad68c7
	  Boot ID:                    eb19c73b-fb84-4dc3-a108-53251704c3a1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-94tsx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-proxy-nnwht    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x2 over 15m)  kubelet          Node ha-095800-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x2 over 15m)  kubelet          Node ha-095800-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x2 over 15m)  kubelet          Node ha-095800-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node ha-095800-m04 event: Registered Node ha-095800-m04 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-095800-m04 event: Registered Node ha-095800-m04 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-095800-m04 event: Registered Node ha-095800-m04 in Controller
	  Normal  NodeReady                15m                kubelet          Node ha-095800-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.042630] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr20 00:31] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.167423] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[ +29.792396] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.095239] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.595379] systemd-fstab-generator[985]: Ignoring "noauto" option for root device
	[  +0.200936] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.229039] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[Apr20 00:32] systemd-fstab-generator[1182]: Ignoring "noauto" option for root device
	[  +0.200197] systemd-fstab-generator[1194]: Ignoring "noauto" option for root device
	[  +0.185601] systemd-fstab-generator[1207]: Ignoring "noauto" option for root device
	[  +0.292167] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[ +11.677495] systemd-fstab-generator[1311]: Ignoring "noauto" option for root device
	[  +0.114912] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.941757] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	[  +6.709210] systemd-fstab-generator[1718]: Ignoring "noauto" option for root device
	[  +0.097038] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.205748] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.867513] systemd-fstab-generator[2209]: Ignoring "noauto" option for root device
	[ +14.771126] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.801145] kauditd_printk_skb: 29 callbacks suppressed
	[Apr20 00:36] kauditd_printk_skb: 35 callbacks suppressed
	[Apr20 00:43] hrtimer: interrupt took 2402858 ns
	
	
	==> etcd [10fc813931a1] <==
	{"level":"warn","ts":"2024-04-20T01:00:40.888648Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:40.988696Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.088544Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.143493Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.188739Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.229395Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.239631Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.245111Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.265084Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.275421Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.284688Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.289251Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.291187Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.296531Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.310033Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.32168Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.330993Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.338076Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.342994Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.355109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.372975Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.382247Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.412253Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.42237Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-20T01:00:41.488083Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b54e7fd34aca7b60","from":"b54e7fd34aca7b60","remote-peer-id":"2ada780314b548b7","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 01:00:41 up 30 min,  0 users,  load average: 0.32, 0.33, 0.33
	Linux ha-095800 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [abcfe6bf3c3f] <==
	I0420 01:00:09.957008       1 main.go:250] Node ha-095800-m04 has CIDR [10.244.3.0/24] 
	I0420 01:00:19.976693       1 main.go:223] Handling node with IPs: map[172.19.32.218:{}]
	I0420 01:00:19.976794       1 main.go:227] handling current node
	I0420 01:00:19.976810       1 main.go:223] Handling node with IPs: map[172.19.39.106:{}]
	I0420 01:00:19.976817       1 main.go:250] Node ha-095800-m02 has CIDR [10.244.1.0/24] 
	I0420 01:00:19.977079       1 main.go:223] Handling node with IPs: map[172.19.47.152:{}]
	I0420 01:00:19.977109       1 main.go:250] Node ha-095800-m03 has CIDR [10.244.2.0/24] 
	I0420 01:00:19.977409       1 main.go:223] Handling node with IPs: map[172.19.41.16:{}]
	I0420 01:00:19.977541       1 main.go:250] Node ha-095800-m04 has CIDR [10.244.3.0/24] 
	I0420 01:00:29.995473       1 main.go:223] Handling node with IPs: map[172.19.32.218:{}]
	I0420 01:00:29.995621       1 main.go:227] handling current node
	I0420 01:00:29.995636       1 main.go:223] Handling node with IPs: map[172.19.39.106:{}]
	I0420 01:00:29.995854       1 main.go:250] Node ha-095800-m02 has CIDR [10.244.1.0/24] 
	I0420 01:00:29.996345       1 main.go:223] Handling node with IPs: map[172.19.47.152:{}]
	I0420 01:00:29.996707       1 main.go:250] Node ha-095800-m03 has CIDR [10.244.2.0/24] 
	I0420 01:00:29.997077       1 main.go:223] Handling node with IPs: map[172.19.41.16:{}]
	I0420 01:00:29.997092       1 main.go:250] Node ha-095800-m04 has CIDR [10.244.3.0/24] 
	I0420 01:00:40.016053       1 main.go:223] Handling node with IPs: map[172.19.32.218:{}]
	I0420 01:00:40.016202       1 main.go:227] handling current node
	I0420 01:00:40.016455       1 main.go:223] Handling node with IPs: map[172.19.39.106:{}]
	I0420 01:00:40.016673       1 main.go:250] Node ha-095800-m02 has CIDR [10.244.1.0/24] 
	I0420 01:00:40.017605       1 main.go:223] Handling node with IPs: map[172.19.47.152:{}]
	I0420 01:00:40.017645       1 main.go:250] Node ha-095800-m03 has CIDR [10.244.2.0/24] 
	I0420 01:00:40.017746       1 main.go:223] Handling node with IPs: map[172.19.41.16:{}]
	I0420 01:00:40.017794       1 main.go:250] Node ha-095800-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [9ddfae1ff47d] <==
	E0420 00:41:10.502886       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52048: use of closed network connection
	E0420 00:41:11.034557       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52050: use of closed network connection
	E0420 00:41:11.529945       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52052: use of closed network connection
	E0420 00:41:12.000231       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52054: use of closed network connection
	E0420 00:41:12.457499       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52056: use of closed network connection
	E0420 00:41:12.944793       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52058: use of closed network connection
	E0420 00:41:13.434584       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52060: use of closed network connection
	E0420 00:41:14.324688       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52063: use of closed network connection
	E0420 00:41:24.766093       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52065: use of closed network connection
	E0420 00:41:25.241133       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52068: use of closed network connection
	E0420 00:41:35.710060       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52070: use of closed network connection
	E0420 00:41:36.152452       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52072: use of closed network connection
	E0420 00:41:46.628651       1 conn.go:339] Error on socket receive: read tcp 172.19.47.254:8443->172.19.32.1:52074: use of closed network connection
	I0420 00:57:43.156128       1 trace.go:236] Trace[1902287298]: "Update" accept:application/json, */*,audit-id:0d283341-f3cc-4eca-a050-f1dd535a64d3,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (20-Apr-2024 00:57:42.626) (total time: 529ms):
	Trace[1902287298]: ["GuaranteedUpdate etcd3" audit-id:0d283341-f3cc-4eca-a050-f1dd535a64d3,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 529ms (00:57:42.626)
	Trace[1902287298]:  ---"Txn call completed" 526ms (00:57:43.155)]
	Trace[1902287298]: [529.666547ms] [529.666547ms] END
	I0420 00:57:43.156874       1 trace.go:236] Trace[2011074953]: "Update" accept:application/json, */*,audit-id:388a35dc-776f-40b5-a0d4-3853adf43781,client:172.19.32.218,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (20-Apr-2024 00:57:42.634) (total time: 522ms):
	Trace[2011074953]: ["GuaranteedUpdate etcd3" audit-id:388a35dc-776f-40b5-a0d4-3853adf43781,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 520ms (00:57:42.636)
	Trace[2011074953]:  ---"Txn call completed" 518ms (00:57:43.156)]
	Trace[2011074953]: [522.222409ms] [522.222409ms] END
	I0420 00:57:43.920203       1 trace.go:236] Trace[1596822974]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.19.32.218,type:*v1.Endpoints,resource:apiServerIPInfo (20-Apr-2024 00:57:43.230) (total time: 690ms):
	Trace[1596822974]: ---"Transaction prepared" 362ms (00:57:43.596)
	Trace[1596822974]: ---"Txn call completed" 323ms (00:57:43.920)
	Trace[1596822974]: [690.140546ms] [690.140546ms] END
	
	
	==> kube-controller-manager [fd73a674b215] <==
	I0420 00:40:03.375549       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-095800-m03\" does not exist"
	I0420 00:40:03.407207       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-095800-m03" podCIDRs=["10.244.2.0/24"]
	I0420 00:40:07.222491       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-095800-m03"
	I0420 00:41:02.246903       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="217.288076ms"
	I0420 00:41:02.529790       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="282.799891ms"
	I0420 00:41:02.672089       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="141.963982ms"
	I0420 00:41:02.733108       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.954119ms"
	I0420 00:41:02.733364       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.698µs"
	I0420 00:41:03.359430       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.099µs"
	I0420 00:41:04.321851       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.599µs"
	I0420 00:41:05.316948       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.786519ms"
	I0420 00:41:05.317528       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.199µs"
	I0420 00:41:05.497630       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.71754ms"
	I0420 00:41:05.498011       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="253.397µs"
	I0420 00:41:05.863924       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.125125ms"
	I0420 00:41:05.864826       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="766.889µs"
	E0420 00:45:12.907955       1 certificate_controller.go:146] Sync csr-wdqx6 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-wdqx6": the object has been modified; please apply your changes to the latest version and try again
	E0420 00:45:12.938393       1 certificate_controller.go:146] Sync csr-wdqx6 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-wdqx6": the object has been modified; please apply your changes to the latest version and try again
	I0420 00:45:13.015620       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-095800-m04\" does not exist"
	I0420 00:45:13.127177       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-095800-m04" podCIDRs=["10.244.3.0/24"]
	I0420 00:45:17.313423       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-095800-m04"
	I0420 00:45:37.067126       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-095800-m04"
	I0420 00:57:57.519973       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-095800-m04"
	I0420 00:57:57.641753       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.163347ms"
	I0420 00:57:57.644491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.499µs"
	
	
	==> kube-proxy [b7a65c81f5f4] <==
	I0420 00:32:50.078575       1 server_linux.go:69] "Using iptables proxy"
	I0420 00:32:50.124878       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.32.218"]
	I0420 00:32:50.223572       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 00:32:50.223719       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 00:32:50.223756       1 server_linux.go:165] "Using iptables Proxier"
	I0420 00:32:50.234624       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 00:32:50.241388       1 server.go:872] "Version info" version="v1.30.0"
	I0420 00:32:50.241441       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:32:50.308364       1 config.go:101] "Starting endpoint slice config controller"
	I0420 00:32:50.310350       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 00:32:50.310451       1 config.go:192] "Starting service config controller"
	I0420 00:32:50.310477       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 00:32:50.327055       1 config.go:319] "Starting node config controller"
	I0420 00:32:50.327072       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 00:32:50.410658       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 00:32:50.410774       1 shared_informer.go:320] Caches are synced for service config
	I0420 00:32:50.428546       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5b3201e92197] <==
	W0420 00:32:32.139575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0420 00:32:32.139846       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0420 00:32:32.145405       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 00:32:32.145555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0420 00:32:32.153658       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0420 00:32:32.153850       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0420 00:32:32.210124       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 00:32:32.210513       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 00:32:32.246495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 00:32:32.246698       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 00:32:32.263161       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0420 00:32:32.263302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0420 00:32:32.286496       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0420 00:32:32.287007       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0420 00:32:32.383039       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 00:32:32.383395       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0420 00:32:35.082995       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0420 00:40:03.496670       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5dp8h\": pod kube-proxy-5dp8h is already assigned to node \"ha-095800-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5dp8h" node="ha-095800-m03"
	E0420 00:40:03.496769       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4a95a0be-301a-482f-a714-3f918af5832c(kube-system/kube-proxy-5dp8h) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5dp8h"
	E0420 00:40:03.496800       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5dp8h\": pod kube-proxy-5dp8h is already assigned to node \"ha-095800-m03\"" pod="kube-system/kube-proxy-5dp8h"
	I0420 00:40:03.496847       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5dp8h" node="ha-095800-m03"
	E0420 00:45:13.152529       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dt2cp\": pod kindnet-dt2cp is already assigned to node \"ha-095800-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-dt2cp" node="ha-095800-m04"
	E0420 00:45:13.152602       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 7ba55530-5493-4c22-b5b9-712d7c9cd5c2(kube-system/kindnet-dt2cp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-dt2cp"
	E0420 00:45:13.152622       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dt2cp\": pod kindnet-dt2cp is already assigned to node \"ha-095800-m04\"" pod="kube-system/kindnet-dt2cp"
	I0420 00:45:13.154515       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dt2cp" node="ha-095800-m04"
	
	
	==> kubelet <==
	Apr 20 00:56:34 ha-095800 kubelet[2216]: E0420 00:56:34.319952    2216 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:56:34 ha-095800 kubelet[2216]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:56:34 ha-095800 kubelet[2216]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:56:34 ha-095800 kubelet[2216]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:56:34 ha-095800 kubelet[2216]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:57:34 ha-095800 kubelet[2216]: E0420 00:57:34.315916    2216 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:57:34 ha-095800 kubelet[2216]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:57:34 ha-095800 kubelet[2216]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:57:34 ha-095800 kubelet[2216]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:57:34 ha-095800 kubelet[2216]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:58:34 ha-095800 kubelet[2216]: E0420 00:58:34.315557    2216 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:58:34 ha-095800 kubelet[2216]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:58:34 ha-095800 kubelet[2216]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:58:34 ha-095800 kubelet[2216]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:58:34 ha-095800 kubelet[2216]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 00:59:34 ha-095800 kubelet[2216]: E0420 00:59:34.322881    2216 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 00:59:34 ha-095800 kubelet[2216]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 00:59:34 ha-095800 kubelet[2216]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 00:59:34 ha-095800 kubelet[2216]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 00:59:34 ha-095800 kubelet[2216]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:00:34 ha-095800 kubelet[2216]: E0420 01:00:34.314787    2216 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:00:34 ha-095800 kubelet[2216]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:00:34 ha-095800 kubelet[2216]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:00:34 ha-095800 kubelet[2216]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:00:34 ha-095800 kubelet[2216]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0419 18:00:33.475900    9608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-095800 -n ha-095800
E0419 18:00:44.604441    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-095800 -n ha-095800: (11.8681842s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-095800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (135.34s)

TestMountStart/serial/RestartStopped (183.09s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-393300
E0419 18:30:44.589263    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p mount-start-2-393300: exit status 90 (2m51.8697875s)

-- stdout --
	* [mount-start-2-393300] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting minikube without Kubernetes in cluster mount-start-2-393300
	* Restarting existing hyperv VM for "mount-start-2-393300" ...
	
	

-- /stdout --
** stderr ** 
	W0419 18:28:01.631881   15100 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 20 01:29:25 mount-start-2-393300 systemd[1]: Starting Docker Application Container Engine...
	Apr 20 01:29:25 mount-start-2-393300 dockerd[660]: time="2024-04-20T01:29:25.315339429Z" level=info msg="Starting up"
	Apr 20 01:29:25 mount-start-2-393300 dockerd[660]: time="2024-04-20T01:29:25.316375044Z" level=info msg="containerd not running, starting managed containerd"
	Apr 20 01:29:25 mount-start-2-393300 dockerd[660]: time="2024-04-20T01:29:25.317495960Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.351014550Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.384987846Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.385052647Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.385141548Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.385161249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.385809958Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.385916860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.386144663Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.386250265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.386274965Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.386289165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.388243294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.388927604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.391795346Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.391904847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.392060549Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.392079850Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.392705659Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.392846761Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.392867361Z" level=info msg="metadata content store policy set" policy=shared
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.394978192Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.395049793Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.395074693Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.395099294Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.395119094Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.395205095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.395756003Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.395900706Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396010007Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396033507Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396052108Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396068108Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396083508Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396099608Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396117909Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396141309Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396161109Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396177010Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396201410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396218110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396242011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396261711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396276611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396295911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396310412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396328412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396351712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396372612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396387713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396401913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396457514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396480214Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396503314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396522015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396543115Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396690617Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396786418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396890820Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396911320Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.396994522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.397021722Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.397044222Z" level=info msg="NRI interface is disabled by configuration."
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.397531429Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.397858534Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.398104638Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 20 01:29:25 mount-start-2-393300 dockerd[666]: time="2024-04-20T01:29:25.398165139Z" level=info msg="containerd successfully booted in 0.049928s"
	Apr 20 01:29:26 mount-start-2-393300 dockerd[660]: time="2024-04-20T01:29:26.365620848Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 20 01:29:26 mount-start-2-393300 dockerd[660]: time="2024-04-20T01:29:26.402241849Z" level=info msg="Loading containers: start."
	Apr 20 01:29:26 mount-start-2-393300 dockerd[660]: time="2024-04-20T01:29:26.642103816Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 20 01:29:26 mount-start-2-393300 dockerd[660]: time="2024-04-20T01:29:26.726570301Z" level=info msg="Loading containers: done."
	Apr 20 01:29:26 mount-start-2-393300 dockerd[660]: time="2024-04-20T01:29:26.752586604Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 20 01:29:26 mount-start-2-393300 dockerd[660]: time="2024-04-20T01:29:26.753079010Z" level=info msg="Daemon has completed initialization"
	Apr 20 01:29:26 mount-start-2-393300 dockerd[660]: time="2024-04-20T01:29:26.803311796Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 20 01:29:26 mount-start-2-393300 systemd[1]: Started Docker Application Container Engine.
	Apr 20 01:29:26 mount-start-2-393300 dockerd[660]: time="2024-04-20T01:29:26.803807902Z" level=info msg="API listen on [::]:2376"
	Apr 20 01:29:52 mount-start-2-393300 dockerd[660]: time="2024-04-20T01:29:52.197776310Z" level=info msg="Processing signal 'terminated'"
	Apr 20 01:29:52 mount-start-2-393300 systemd[1]: Stopping Docker Application Container Engine...
	Apr 20 01:29:52 mount-start-2-393300 dockerd[660]: time="2024-04-20T01:29:52.199399007Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 20 01:29:52 mount-start-2-393300 dockerd[660]: time="2024-04-20T01:29:52.199976806Z" level=info msg="Daemon shutdown complete"
	Apr 20 01:29:52 mount-start-2-393300 dockerd[660]: time="2024-04-20T01:29:52.200351205Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 20 01:29:52 mount-start-2-393300 dockerd[660]: time="2024-04-20T01:29:52.201191803Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 20 01:29:53 mount-start-2-393300 systemd[1]: docker.service: Deactivated successfully.
	Apr 20 01:29:53 mount-start-2-393300 systemd[1]: Stopped Docker Application Container Engine.
	Apr 20 01:29:53 mount-start-2-393300 systemd[1]: Starting Docker Application Container Engine...
	Apr 20 01:29:53 mount-start-2-393300 dockerd[1032]: time="2024-04-20T01:29:53.276457918Z" level=info msg="Starting up"
	Apr 20 01:30:53 mount-start-2-393300 dockerd[1032]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 20 01:30:53 mount-start-2-393300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 20 01:30:53 mount-start-2-393300 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 20 01:30:53 mount-start-2-393300 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:168: restart failed: "out/minikube-windows-amd64.exe start -p mount-start-2-393300" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-393300 -n mount-start-2-393300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-393300 -n mount-start-2-393300: exit status 6 (11.2075332s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0419 18:30:53.522962   13040 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0419 18:31:04.539102   13040 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-393300" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-393300" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/RestartStopped (183.09s)

TestMultiNode/serial/PingHostFrom2Pods (55.17s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- exec busybox-fc5497c4f-2d5hs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- exec busybox-fc5497c4f-2d5hs -- sh -c "ping -c 1 172.19.32.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- exec busybox-fc5497c4f-2d5hs -- sh -c "ping -c 1 172.19.32.1": exit status 1 (10.4661981s)

-- stdout --
	PING 172.19.32.1 (172.19.32.1): 56 data bytes
	
	--- 172.19.32.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0419 18:39:12.972790    2276 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.19.32.1) from pod (busybox-fc5497c4f-2d5hs): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- exec busybox-fc5497c4f-xnz2k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- exec busybox-fc5497c4f-xnz2k -- sh -c "ping -c 1 172.19.32.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- exec busybox-fc5497c4f-xnz2k -- sh -c "ping -c 1 172.19.32.1": exit status 1 (10.4503589s)

-- stdout --
	PING 172.19.32.1 (172.19.32.1): 56 data bytes
	
	--- 172.19.32.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0419 18:39:23.898082    7212 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.19.32.1) from pod (busybox-fc5497c4f-xnz2k): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-348000 -n multinode-348000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-348000 -n multinode-348000: (11.4253565s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 logs -n 25: (8.0819716s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-393300                           | mount-start-2-393300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:24 PDT | 19 Apr 24 18:26 PDT |
	|         | --memory=2048 --mount                             |                      |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |                   |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |                   |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host         | mount-start-2-393300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:26 PDT |                     |
	|         | --profile mount-start-2-393300 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-393300 ssh -- ls                    | mount-start-2-393300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:26 PDT | 19 Apr 24 18:26 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-393300                           | mount-start-1-393300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:26 PDT | 19 Apr 24 18:27 PDT |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-393300 ssh -- ls                    | mount-start-2-393300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:27 PDT | 19 Apr 24 18:27 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-393300                           | mount-start-2-393300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:27 PDT | 19 Apr 24 18:28 PDT |
	| start   | -p mount-start-2-393300                           | mount-start-2-393300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:28 PDT |                     |
	| delete  | -p mount-start-2-393300                           | mount-start-2-393300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:31 PDT | 19 Apr 24 18:32 PDT |
	| delete  | -p mount-start-1-393300                           | mount-start-1-393300 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:32 PDT | 19 Apr 24 18:32 PDT |
	| start   | -p multinode-348000                               | multinode-348000     | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:32 PDT | 19 Apr 24 18:38 PDT |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-348000 -- apply -f                   | multinode-348000     | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:39 PDT | 19 Apr 24 18:39 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-348000 -- rollout                    | multinode-348000     | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:39 PDT | 19 Apr 24 18:39 PDT |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-348000 -- get pods -o                | multinode-348000     | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:39 PDT | 19 Apr 24 18:39 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-348000 -- get pods -o                | multinode-348000     | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:39 PDT | 19 Apr 24 18:39 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-348000 -- exec                       | multinode-348000     | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:39 PDT | 19 Apr 24 18:39 PDT |
	|         | busybox-fc5497c4f-2d5hs --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-348000 -- exec                       | multinode-348000     | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:39 PDT | 19 Apr 24 18:39 PDT |
	|         | busybox-fc5497c4f-xnz2k --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-348000 -- exec                       | multinode-348000     | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:39 PDT | 19 Apr 24 18:39 PDT |
	|         | busybox-fc5497c4f-2d5hs --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-348000 -- exec                       | multinode-348000     | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:39 PDT | 19 Apr 24 18:39 PDT |
	|         | busybox-fc5497c4f-xnz2k --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-348000 -- exec                       | multinode-348000     | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:39 PDT | 19 Apr 24 18:39 PDT |
	|         | busybox-fc5497c4f-2d5hs -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-348000 -- exec                       | multinode-348000     | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:39 PDT | 19 Apr 24 18:39 PDT |
	|         | busybox-fc5497c4f-xnz2k -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-348000 -- get pods -o                | multinode-348000     | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:39 PDT | 19 Apr 24 18:39 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-348000 -- exec                       | multinode-348000     | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:39 PDT | 19 Apr 24 18:39 PDT |
	|         | busybox-fc5497c4f-2d5hs                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-348000 -- exec                       | multinode-348000     | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:39 PDT |                     |
	|         | busybox-fc5497c4f-2d5hs -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.32.1                          |                      |                   |         |                     |                     |
	| kubectl | -p multinode-348000 -- exec                       | multinode-348000     | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:39 PDT | 19 Apr 24 18:39 PDT |
	|         | busybox-fc5497c4f-xnz2k                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-348000 -- exec                       | multinode-348000     | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:39 PDT |                     |
	|         | busybox-fc5497c4f-xnz2k -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.32.1                          |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 18:32:08
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 18:32:08.028326   13300 out.go:291] Setting OutFile to fd 884 ...
	I0419 18:32:08.028990   13300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 18:32:08.028990   13300 out.go:304] Setting ErrFile to fd 760...
	I0419 18:32:08.028990   13300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 18:32:08.062661   13300 out.go:298] Setting JSON to false
	I0419 18:32:08.067203   13300 start.go:129] hostinfo: {"hostname":"minikube1","uptime":15186,"bootTime":1713561541,"procs":205,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0419 18:32:08.067203   13300 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 18:32:08.071564   13300 out.go:177] * [multinode-348000] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0419 18:32:08.074916   13300 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 18:32:08.074916   13300 notify.go:220] Checking for updates...
	I0419 18:32:08.076332   13300 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 18:32:08.080719   13300 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0419 18:32:08.083226   13300 out.go:177]   - MINIKUBE_LOCATION=18703
	I0419 18:32:08.085634   13300 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 18:32:08.091421   13300 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:32:08.091849   13300 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 18:32:13.164730   13300 out.go:177] * Using the hyperv driver based on user configuration
	I0419 18:32:13.169507   13300 start.go:297] selected driver: hyperv
	I0419 18:32:13.169507   13300 start.go:901] validating driver "hyperv" against <nil>
	I0419 18:32:13.169507   13300 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 18:32:13.223203   13300 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 18:32:13.224414   13300 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 18:32:13.224414   13300 cni.go:84] Creating CNI manager for ""
	I0419 18:32:13.224414   13300 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0419 18:32:13.224414   13300 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0419 18:32:13.224414   13300 start.go:340] cluster config:
	{Name:multinode-348000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 18:32:13.224414   13300 iso.go:125] acquiring lock: {Name:mk297f2abb67cbbcd36490c866afe693892d0c05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 18:32:13.228979   13300 out.go:177] * Starting "multinode-348000" primary control-plane node in "multinode-348000" cluster
	I0419 18:32:13.230941   13300 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 18:32:13.230941   13300 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0419 18:32:13.230941   13300 cache.go:56] Caching tarball of preloaded images
	I0419 18:32:13.233489   13300 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0419 18:32:13.233489   13300 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 18:32:13.233489   13300 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 18:32:13.233489   13300 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json: {Name:mk452b4fe8761b376721271272e81469057d026b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:32:13.234739   13300 start.go:360] acquireMachinesLock for multinode-348000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 18:32:13.235547   13300 start.go:364] duration metric: took 808.5µs to acquireMachinesLock for "multinode-348000"
	I0419 18:32:13.235741   13300 start.go:93] Provisioning new machine with config: &{Name:multinode-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 18:32:13.235741   13300 start.go:125] createHost starting for "" (driver="hyperv")
	I0419 18:32:13.236144   13300 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 18:32:13.239917   13300 start.go:159] libmachine.API.Create for "multinode-348000" (driver="hyperv")
	I0419 18:32:13.239917   13300 client.go:168] LocalClient.Create starting
	I0419 18:32:13.240205   13300 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0419 18:32:13.240205   13300 main.go:141] libmachine: Decoding PEM data...
	I0419 18:32:13.240770   13300 main.go:141] libmachine: Parsing certificate...
	I0419 18:32:13.240900   13300 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0419 18:32:13.240900   13300 main.go:141] libmachine: Decoding PEM data...
	I0419 18:32:13.240900   13300 main.go:141] libmachine: Parsing certificate...
	I0419 18:32:13.240900   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0419 18:32:15.225331   13300 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0419 18:32:15.225331   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:32:15.225508   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0419 18:32:16.904461   13300 main.go:141] libmachine: [stdout =====>] : False
	
	I0419 18:32:16.904461   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:32:16.913527   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 18:32:18.384949   13300 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 18:32:18.384949   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:32:18.397348   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 18:32:21.786033   13300 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 18:32:21.799514   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:32:21.802405   13300 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0419 18:32:22.301669   13300 main.go:141] libmachine: Creating SSH key...
	I0419 18:32:22.462637   13300 main.go:141] libmachine: Creating VM...
	I0419 18:32:22.462637   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 18:32:25.244851   13300 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 18:32:25.258396   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:32:25.258396   13300 main.go:141] libmachine: Using switch "Default Switch"
	I0419 18:32:25.258525   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 18:32:26.962146   13300 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 18:32:26.962423   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:32:26.962423   13300 main.go:141] libmachine: Creating VHD
	I0419 18:32:26.962423   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0419 18:32:30.536726   13300 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5481F234-C5A2-47EC-9F56-3B7F557E2391
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0419 18:32:30.536726   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:32:30.536726   13300 main.go:141] libmachine: Writing magic tar header
	I0419 18:32:30.536948   13300 main.go:141] libmachine: Writing SSH key tar header
	I0419 18:32:30.549785   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0419 18:32:33.531178   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:32:33.531178   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:32:33.543867   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\disk.vhd' -SizeBytes 20000MB
	I0419 18:32:36.011637   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:32:36.011637   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:32:36.022311   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-348000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0419 18:32:39.466055   13300 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-348000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0419 18:32:39.466055   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:32:39.466055   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-348000 -DynamicMemoryEnabled $false
	I0419 18:32:41.568796   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:32:41.568796   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:32:41.571127   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-348000 -Count 2
	I0419 18:32:43.641569   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:32:43.641569   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:32:43.654502   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-348000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\boot2docker.iso'
	I0419 18:32:46.145842   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:32:46.145842   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:32:46.158289   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-348000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\disk.vhd'
	I0419 18:32:48.696364   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:32:48.696364   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:32:48.696364   13300 main.go:141] libmachine: Starting VM...
	I0419 18:32:48.696364   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-348000
	I0419 18:32:51.782627   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:32:51.782627   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:32:51.782627   13300 main.go:141] libmachine: Waiting for host to start...
	I0419 18:32:51.783511   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:32:53.977553   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:32:53.977553   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:32:53.982621   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:32:56.448852   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:32:56.448852   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:32:57.465030   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:32:59.551819   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:32:59.563483   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:32:59.563618   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:33:01.954925   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:33:01.967378   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:02.968076   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:33:05.038635   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:33:05.038635   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:05.040560   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:33:07.374834   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:33:07.374834   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:08.375188   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:33:10.458860   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:33:10.465717   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:10.465717   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:33:12.907056   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:33:12.907120   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:13.915263   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:33:16.010321   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:33:16.010321   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:16.022180   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:33:18.459103   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:33:18.465486   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:18.465565   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:33:20.505239   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:33:20.512833   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:20.512833   13300 machine.go:94] provisionDockerMachine start ...
	I0419 18:33:20.512971   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:33:22.531772   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:33:22.543871   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:22.543961   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:33:24.985775   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:33:24.994430   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:25.001347   13300 main.go:141] libmachine: Using SSH client type: native
	I0419 18:33:25.008877   13300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.231 22 <nil> <nil>}
	I0419 18:33:25.008877   13300 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 18:33:25.145195   13300 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0419 18:33:25.145195   13300 buildroot.go:166] provisioning hostname "multinode-348000"
	I0419 18:33:25.145195   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:33:27.135463   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:33:27.135463   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:27.135743   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:33:29.549185   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:33:29.549185   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:29.568231   13300 main.go:141] libmachine: Using SSH client type: native
	I0419 18:33:29.568231   13300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.231 22 <nil> <nil>}
	I0419 18:33:29.568231   13300 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-348000 && echo "multinode-348000" | sudo tee /etc/hostname
	I0419 18:33:29.734241   13300 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-348000
	
	I0419 18:33:29.734241   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:33:31.757062   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:33:31.768747   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:31.768861   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:33:34.227956   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:33:34.235414   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:34.241116   13300 main.go:141] libmachine: Using SSH client type: native
	I0419 18:33:34.241820   13300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.231 22 <nil> <nil>}
	I0419 18:33:34.241820   13300 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-348000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-348000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-348000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 18:33:34.382845   13300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 18:33:34.382961   13300 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0419 18:33:34.383019   13300 buildroot.go:174] setting up certificates
	I0419 18:33:34.383084   13300 provision.go:84] configureAuth start
	I0419 18:33:34.383152   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:33:36.348299   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:33:36.348299   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:36.361097   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:33:38.783437   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:33:38.790154   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:38.790255   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:33:40.795045   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:33:40.795045   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:40.806511   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:33:43.235975   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:33:43.235975   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:43.237563   13300 provision.go:143] copyHostCerts
	I0419 18:33:43.237889   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0419 18:33:43.238387   13300 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0419 18:33:43.238387   13300 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0419 18:33:43.238652   13300 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0419 18:33:43.239963   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0419 18:33:43.240292   13300 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0419 18:33:43.240292   13300 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0419 18:33:43.240292   13300 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0419 18:33:43.241542   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0419 18:33:43.241789   13300 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0419 18:33:43.241789   13300 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0419 18:33:43.242271   13300 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0419 18:33:43.243253   13300 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-348000 san=[127.0.0.1 172.19.42.231 localhost minikube multinode-348000]
	I0419 18:33:43.599451   13300 provision.go:177] copyRemoteCerts
	I0419 18:33:43.621768   13300 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 18:33:43.621768   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:33:45.617790   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:33:45.630277   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:45.630357   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:33:48.007466   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:33:48.013406   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:48.013614   13300 sshutil.go:53] new ssh client: &{IP:172.19.42.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 18:33:48.117633   13300 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4958558s)
	I0419 18:33:48.117743   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0419 18:33:48.118269   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0419 18:33:48.165713   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0419 18:33:48.166227   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0419 18:33:48.212755   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0419 18:33:48.213262   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0419 18:33:48.262163   13300 provision.go:87] duration metric: took 13.8789998s to configureAuth
	I0419 18:33:48.262225   13300 buildroot.go:189] setting minikube options for container-runtime
	I0419 18:33:48.262926   13300 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:33:48.262998   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:33:50.250478   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:33:50.262959   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:50.263045   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:33:52.622684   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:33:52.622684   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:52.637440   13300 main.go:141] libmachine: Using SSH client type: native
	I0419 18:33:52.638108   13300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.231 22 <nil> <nil>}
	I0419 18:33:52.638108   13300 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0419 18:33:52.777858   13300 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0419 18:33:52.777858   13300 buildroot.go:70] root file system type: tmpfs
	I0419 18:33:52.777858   13300 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0419 18:33:52.777858   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:33:54.720654   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:33:54.720654   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:54.734099   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:33:57.159280   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:33:57.159280   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:57.172636   13300 main.go:141] libmachine: Using SSH client type: native
	I0419 18:33:57.172636   13300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.231 22 <nil> <nil>}
	I0419 18:33:57.172636   13300 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0419 18:33:57.328884   13300 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0419 18:33:57.329019   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:33:59.325355   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:33:59.338180   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:33:59.338180   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:34:01.729225   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:34:01.729225   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:34:01.750434   13300 main.go:141] libmachine: Using SSH client type: native
	I0419 18:34:01.750599   13300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.231 22 <nil> <nil>}
	I0419 18:34:01.750599   13300 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0419 18:34:03.879138   13300 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0419 18:34:03.879138   13300 machine.go:97] duration metric: took 43.3661144s to provisionDockerMachine
	I0419 18:34:03.879138   13300 client.go:171] duration metric: took 1m50.6389772s to LocalClient.Create
	I0419 18:34:03.879331   13300 start.go:167] duration metric: took 1m50.63917s to libmachine.API.Create "multinode-348000"
	I0419 18:34:03.879466   13300 start.go:293] postStartSetup for "multinode-348000" (driver="hyperv")
	I0419 18:34:03.879466   13300 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 18:34:03.890254   13300 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 18:34:03.890254   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:34:05.874064   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:34:05.874064   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:34:05.874152   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:34:08.264236   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:34:08.267524   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:34:08.267729   13300 sshutil.go:53] new ssh client: &{IP:172.19.42.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 18:34:08.366232   13300 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4759687s)
	I0419 18:34:08.389561   13300 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 18:34:08.397521   13300 command_runner.go:130] > NAME=Buildroot
	I0419 18:34:08.398102   13300 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0419 18:34:08.398102   13300 command_runner.go:130] > ID=buildroot
	I0419 18:34:08.398102   13300 command_runner.go:130] > VERSION_ID=2023.02.9
	I0419 18:34:08.398102   13300 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0419 18:34:08.398102   13300 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 18:34:08.398240   13300 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0419 18:34:08.398612   13300 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0419 18:34:08.399474   13300 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> 34162.pem in /etc/ssl/certs
	I0419 18:34:08.399578   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /etc/ssl/certs/34162.pem
	I0419 18:34:08.411481   13300 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 18:34:08.432859   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /etc/ssl/certs/34162.pem (1708 bytes)
	I0419 18:34:08.473017   13300 start.go:296] duration metric: took 4.593541s for postStartSetup
	I0419 18:34:08.476060   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:34:10.461415   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:34:10.461415   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:34:10.461501   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:34:12.927962   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:34:12.927962   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:34:12.928257   13300 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 18:34:12.931274   13300 start.go:128] duration metric: took 1m59.6952693s to createHost
	I0419 18:34:12.931274   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:34:14.937102   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:34:14.937102   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:34:14.948802   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:34:17.367478   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:34:17.367478   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:34:17.379049   13300 main.go:141] libmachine: Using SSH client type: native
	I0419 18:34:17.379676   13300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.231 22 <nil> <nil>}
	I0419 18:34:17.379676   13300 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0419 18:34:17.509259   13300 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713576857.507537981
	
	I0419 18:34:17.509320   13300 fix.go:216] guest clock: 1713576857.507537981
	I0419 18:34:17.509320   13300 fix.go:229] Guest: 2024-04-19 18:34:17.507537981 -0700 PDT Remote: 2024-04-19 18:34:12.931274 -0700 PDT m=+124.999230801 (delta=4.576263981s)
	I0419 18:34:17.509445   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:34:19.504012   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:34:19.504012   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:34:19.516097   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:34:21.971576   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:34:21.980376   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:34:21.986739   13300 main.go:141] libmachine: Using SSH client type: native
	I0419 18:34:21.987410   13300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.231 22 <nil> <nil>}
	I0419 18:34:21.987410   13300 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713576857
	I0419 18:34:22.129420   13300 main.go:141] libmachine: SSH cmd err, output: <nil>: Sat Apr 20 01:34:17 UTC 2024
	
	I0419 18:34:22.129483   13300 fix.go:236] clock set: Sat Apr 20 01:34:17 UTC 2024
	 (err=<nil>)
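	The clock-fix sequence above reads the guest clock with sub-second precision (`date +%s.%N`), computes the host/guest delta, and resets the guest with `sudo date -s @<epoch>`. A minimal sketch using the epoch value observed in this log (the explicit output format here is illustrative; the real `date -s` step needs root on the guest, so it is left commented out):

	```shell
	# Guest epoch as captured above via: date +%s.%N (integer part)
	guest_epoch=1713576857

	# Render it the way the log does, pinned to an explicit UTC format:
	date -u -d "@${guest_epoch}" '+%a %b %d %H:%M:%S UTC %Y'
	# -> Sat Apr 20 01:34:17 UTC 2024

	# Applying it on the guest (root required), as in the SSH command above:
	# sudo date -s "@${guest_epoch}"
	```

	The observed delta (4.58s) stays under minikube's resync threshold only because the host timestamp was taken ~5s earlier in wall time; the reset still runs unconditionally here.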
	I0419 18:34:22.129483   13300 start.go:83] releasing machines lock for "multinode-348000", held for 2m8.8936142s
	I0419 18:34:22.129606   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:34:24.120467   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:34:24.120467   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:34:24.132983   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:34:26.549891   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:34:26.549891   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:34:26.562522   13300 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 18:34:26.562610   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:34:26.573782   13300 ssh_runner.go:195] Run: cat /version.json
	I0419 18:34:26.573782   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:34:28.623692   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:34:28.623692   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:34:28.623692   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:34:28.623692   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:34:28.637780   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:34:28.637911   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:34:31.177130   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:34:31.183912   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:34:31.184156   13300 sshutil.go:53] new ssh client: &{IP:172.19.42.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 18:34:31.211906   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:34:31.217461   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:34:31.217523   13300 sshutil.go:53] new ssh client: &{IP:172.19.42.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 18:34:31.415177   13300 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0419 18:34:31.415246   13300 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8527134s)
	I0419 18:34:31.415246   13300 command_runner.go:130] > {"iso_version": "v1.33.0", "kicbase_version": "v0.0.43-1713236840-18649", "minikube_version": "v1.33.0", "commit": "4bd203f0c710e7fdd30539846cf2bc6624a2556d"}
	I0419 18:34:31.415246   13300 ssh_runner.go:235] Completed: cat /version.json: (4.8414532s)
	I0419 18:34:31.430010   13300 ssh_runner.go:195] Run: systemctl --version
	I0419 18:34:31.438235   13300 command_runner.go:130] > systemd 252 (252)
	I0419 18:34:31.438550   13300 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0419 18:34:31.449534   13300 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0419 18:34:31.457901   13300 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0419 18:34:31.458784   13300 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 18:34:31.471866   13300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 18:34:31.500649   13300 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0419 18:34:31.500739   13300 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 18:34:31.500791   13300 start.go:494] detecting cgroup driver to use...
	I0419 18:34:31.500831   13300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 18:34:31.534859   13300 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0419 18:34:31.552357   13300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0419 18:34:31.585699   13300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0419 18:34:31.603306   13300 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0419 18:34:31.615908   13300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0419 18:34:31.650477   13300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 18:34:31.685412   13300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0419 18:34:31.715789   13300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 18:34:31.750538   13300 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 18:34:31.788555   13300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0419 18:34:31.821440   13300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0419 18:34:31.853764   13300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
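	The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place while preserving indentation via the `\1` backreference. A sketch of the two key rewrites against a scratch file (the temp file and sample values are illustrative; the patterns are taken verbatim from the log):

	```shell
	# Build a throwaway config fragment with the pre-edit values.
	cfg=$(mktemp)
	printf '    SystemdCgroup = true\n    conf_dir = "/opt/cni"\n' > "$cfg"

	# Force the cgroupfs driver, keeping the original leading whitespace.
	sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
	# Point containerd's CNI config dir at /etc/cni/net.d.
	sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$cfg"

	cat "$cfg"
	rm -f "$cfg"
	```

	Using `sed` rather than a TOML parser keeps the edits idempotent across reruns: each pattern matches both the stock value and a previously rewritten one.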
	I0419 18:34:31.887424   13300 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 18:34:31.903990   13300 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0419 18:34:31.916708   13300 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 18:34:31.948771   13300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:34:32.143588   13300 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0419 18:34:32.170295   13300 start.go:494] detecting cgroup driver to use...
	I0419 18:34:32.184142   13300 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0419 18:34:32.211440   13300 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0419 18:34:32.211440   13300 command_runner.go:130] > [Unit]
	I0419 18:34:32.211440   13300 command_runner.go:130] > Description=Docker Application Container Engine
	I0419 18:34:32.211440   13300 command_runner.go:130] > Documentation=https://docs.docker.com
	I0419 18:34:32.211440   13300 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0419 18:34:32.211440   13300 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0419 18:34:32.211440   13300 command_runner.go:130] > StartLimitBurst=3
	I0419 18:34:32.211440   13300 command_runner.go:130] > StartLimitIntervalSec=60
	I0419 18:34:32.211440   13300 command_runner.go:130] > [Service]
	I0419 18:34:32.211440   13300 command_runner.go:130] > Type=notify
	I0419 18:34:32.211440   13300 command_runner.go:130] > Restart=on-failure
	I0419 18:34:32.211440   13300 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0419 18:34:32.211440   13300 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0419 18:34:32.211440   13300 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0419 18:34:32.211440   13300 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0419 18:34:32.211440   13300 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0419 18:34:32.211440   13300 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0419 18:34:32.211440   13300 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0419 18:34:32.211440   13300 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0419 18:34:32.211440   13300 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0419 18:34:32.211440   13300 command_runner.go:130] > ExecStart=
	I0419 18:34:32.211440   13300 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0419 18:34:32.211440   13300 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0419 18:34:32.211440   13300 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0419 18:34:32.211440   13300 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0419 18:34:32.211440   13300 command_runner.go:130] > LimitNOFILE=infinity
	I0419 18:34:32.211440   13300 command_runner.go:130] > LimitNPROC=infinity
	I0419 18:34:32.211440   13300 command_runner.go:130] > LimitCORE=infinity
	I0419 18:34:32.211440   13300 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0419 18:34:32.211440   13300 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0419 18:34:32.211440   13300 command_runner.go:130] > TasksMax=infinity
	I0419 18:34:32.211440   13300 command_runner.go:130] > TimeoutStartSec=0
	I0419 18:34:32.211440   13300 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0419 18:34:32.211440   13300 command_runner.go:130] > Delegate=yes
	I0419 18:34:32.211440   13300 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0419 18:34:32.211440   13300 command_runner.go:130] > KillMode=process
	I0419 18:34:32.211440   13300 command_runner.go:130] > [Install]
	I0419 18:34:32.211986   13300 command_runner.go:130] > WantedBy=multi-user.target
	I0419 18:34:32.231646   13300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 18:34:32.272104   13300 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 18:34:32.319629   13300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 18:34:32.357505   13300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 18:34:32.404318   13300 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0419 18:34:32.468779   13300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 18:34:32.491847   13300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 18:34:32.527451   13300 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0419 18:34:32.541429   13300 ssh_runner.go:195] Run: which cri-dockerd
	I0419 18:34:32.543486   13300 command_runner.go:130] > /usr/bin/cri-dockerd
	I0419 18:34:32.560371   13300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0419 18:34:32.578278   13300 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0419 18:34:32.621940   13300 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0419 18:34:32.826780   13300 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0419 18:34:33.016993   13300 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0419 18:34:33.017219   13300 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0419 18:34:33.066981   13300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:34:33.271592   13300 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 18:34:35.764031   13300 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4924341s)
	I0419 18:34:35.777460   13300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0419 18:34:35.813781   13300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 18:34:35.855495   13300 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0419 18:34:36.042052   13300 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0419 18:34:36.242054   13300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:34:36.431201   13300 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0419 18:34:36.478375   13300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 18:34:36.521388   13300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:34:36.719469   13300 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0419 18:34:36.826274   13300 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0419 18:34:36.840601   13300 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0419 18:34:36.852206   13300 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0419 18:34:36.852295   13300 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0419 18:34:36.852295   13300 command_runner.go:130] > Device: 0,22	Inode: 879         Links: 1
	I0419 18:34:36.852295   13300 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0419 18:34:36.852295   13300 command_runner.go:130] > Access: 2024-04-20 01:34:36.725607031 +0000
	I0419 18:34:36.852364   13300 command_runner.go:130] > Modify: 2024-04-20 01:34:36.725607031 +0000
	I0419 18:34:36.852364   13300 command_runner.go:130] > Change: 2024-04-20 01:34:36.728607002 +0000
	I0419 18:34:36.852364   13300 command_runner.go:130] >  Birth: -
	I0419 18:34:36.852451   13300 start.go:562] Will wait 60s for crictl version
	I0419 18:34:36.864332   13300 ssh_runner.go:195] Run: which crictl
	I0419 18:34:36.870066   13300 command_runner.go:130] > /usr/bin/crictl
	I0419 18:34:36.883908   13300 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 18:34:36.931118   13300 command_runner.go:130] > Version:  0.1.0
	I0419 18:34:36.931203   13300 command_runner.go:130] > RuntimeName:  docker
	I0419 18:34:36.931203   13300 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0419 18:34:36.931203   13300 command_runner.go:130] > RuntimeApiVersion:  v1
	I0419 18:34:36.931203   13300 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0419 18:34:36.940988   13300 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 18:34:36.971054   13300 command_runner.go:130] > 26.0.1
	I0419 18:34:36.981544   13300 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 18:34:37.012216   13300 command_runner.go:130] > 26.0.1
	I0419 18:34:37.017506   13300 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0419 18:34:37.017592   13300 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0419 18:34:37.021869   13300 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0419 18:34:37.021926   13300 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0419 18:34:37.021926   13300 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0419 18:34:37.021926   13300 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8c:b9:25 Flags:up|broadcast|multicast|running}
	I0419 18:34:37.025290   13300 ip.go:210] interface addr: fe80::ce04:318e:a1d8:4460/64
	I0419 18:34:37.025290   13300 ip.go:210] interface addr: 172.19.32.1/20
	I0419 18:34:37.036850   13300 ssh_runner.go:195] Run: grep 172.19.32.1	host.minikube.internal$ /etc/hosts
	I0419 18:34:37.037649   13300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.32.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
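	The `/etc/hosts` rewrite above removes any stale `host.minikube.internal` entry and appends the current gateway IP atomically (filter to a temp file, then `cp` into place). A sketch against a scratch file instead of the real `/etc/hosts` (the sample entries are illustrative; the filter/append pattern is the one in the log):

	```shell
	# Scratch stand-in for /etc/hosts, with one stale minikube entry.
	hosts=$(mktemp)
	printf '127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n' > "$hosts"

	# Drop any line ending in "<tab>host.minikube.internal", append the fresh one.
	{ grep -v $'\thost.minikube.internal$' "$hosts"; \
	  printf '172.19.32.1\thost.minikube.internal\n'; } > "$hosts.new"

	cat "$hosts.new"
	rm -f "$hosts" "$hosts.new"
	```

	Writing to `/tmp/h.$$` and then `sudo cp`-ing back, as the log does, avoids truncating `/etc/hosts` mid-rewrite if the shell pipeline fails.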
	I0419 18:34:37.068037   13300 kubeadm.go:877] updating cluster {Name:multinode-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.42.231 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 18:34:37.068734   13300 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 18:34:37.078872   13300 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0419 18:34:37.098912   13300 docker.go:685] Got preloaded images: 
	I0419 18:34:37.098953   13300 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0419 18:34:37.114051   13300 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0419 18:34:37.131146   13300 command_runner.go:139] > {"Repositories":{}}
	I0419 18:34:37.145170   13300 ssh_runner.go:195] Run: which lz4
	I0419 18:34:37.151425   13300 command_runner.go:130] > /usr/bin/lz4
	I0419 18:34:37.151512   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0419 18:34:37.164127   13300 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0419 18:34:37.167628   13300 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0419 18:34:37.176062   13300 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0419 18:34:37.176300   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0419 18:34:39.465850   13300 docker.go:649] duration metric: took 2.3139987s to copy over tarball
	I0419 18:34:39.483492   13300 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0419 18:34:48.180644   13300 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6970851s)
	I0419 18:34:48.180725   13300 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0419 18:34:48.247386   13300 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0419 18:34:48.257166   13300 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.0":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.0":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.0":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e
07f7ac08e80ba0b"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.0":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0419 18:34:48.257166   13300 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0419 18:34:48.313287   13300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:34:48.521609   13300 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 18:34:51.859679   13300 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.337737s)
	I0419 18:34:51.870149   13300 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0419 18:34:51.892976   13300 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0419 18:34:51.893030   13300 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 18:34:51.893030   13300 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0419 18:34:51.893082   13300 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0419 18:34:51.893082   13300 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0419 18:34:51.893082   13300 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0419 18:34:51.893134   13300 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0419 18:34:51.893134   13300 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 18:34:51.893478   13300 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0419 18:34:51.893478   13300 cache_images.go:84] Images are preloaded, skipping loading
	I0419 18:34:51.893478   13300 kubeadm.go:928] updating node { 172.19.42.231 8443 v1.30.0 docker true true} ...
	I0419 18:34:51.893478   13300 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-348000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.42.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 18:34:51.912800   13300 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0419 18:34:51.938816   13300 command_runner.go:130] > cgroupfs
	I0419 18:34:51.943241   13300 cni.go:84] Creating CNI manager for ""
	I0419 18:34:51.943273   13300 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0419 18:34:51.943341   13300 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 18:34:51.943377   13300 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.42.231 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-348000 NodeName:multinode-348000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.42.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.42.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 18:34:51.943639   13300 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.42.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-348000"
	  kubeletExtraArgs:
	    node-ip: 172.19.42.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.42.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0419 18:34:51.956309   13300 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 18:34:51.977044   13300 command_runner.go:130] > kubeadm
	I0419 18:34:51.977118   13300 command_runner.go:130] > kubectl
	I0419 18:34:51.977118   13300 command_runner.go:130] > kubelet
	I0419 18:34:51.977118   13300 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 18:34:51.989862   13300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0419 18:34:52.006858   13300 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0419 18:34:52.035830   13300 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 18:34:52.069005   13300 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0419 18:34:52.115760   13300 ssh_runner.go:195] Run: grep 172.19.42.231	control-plane.minikube.internal$ /etc/hosts
	I0419 18:34:52.121031   13300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.42.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 18:34:52.160900   13300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:34:52.362508   13300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 18:34:52.395681   13300 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000 for IP: 172.19.42.231
	I0419 18:34:52.395743   13300 certs.go:194] generating shared ca certs ...
	I0419 18:34:52.395773   13300 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:34:52.396064   13300 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0419 18:34:52.396898   13300 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0419 18:34:52.397124   13300 certs.go:256] generating profile certs ...
	I0419 18:34:52.397712   13300 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\client.key
	I0419 18:34:52.397929   13300 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\client.crt with IP's: []
	I0419 18:34:52.618196   13300 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\client.crt ...
	I0419 18:34:52.618196   13300 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\client.crt: {Name:mk3159341776e031f3617cccc43584a6542b02f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:34:52.620743   13300 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\client.key ...
	I0419 18:34:52.620743   13300 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\client.key: {Name:mkf9b8631e366f222f5da56ff33a90917609324e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:34:52.624572   13300 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key.3f1d2692
	I0419 18:34:52.624572   13300 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt.3f1d2692 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.42.231]
	I0419 18:34:52.708773   13300 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt.3f1d2692 ...
	I0419 18:34:52.708773   13300 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt.3f1d2692: {Name:mkba47055511f0bed983dedb529bd6a1514145b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:34:52.716406   13300 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key.3f1d2692 ...
	I0419 18:34:52.716406   13300 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key.3f1d2692: {Name:mk367b1519e71ab91f2319bb59ebc44404f39e9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:34:52.717858   13300 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt.3f1d2692 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt
	I0419 18:34:52.731980   13300 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key.3f1d2692 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key
	I0419 18:34:52.733899   13300 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.key
	I0419 18:34:52.734091   13300 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.crt with IP's: []
	I0419 18:34:53.001368   13300 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.crt ...
	I0419 18:34:53.001368   13300 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.crt: {Name:mkc5392cfaff27517128a9e4ac92108f134aa6b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:34:53.002074   13300 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.key ...
	I0419 18:34:53.002074   13300 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.key: {Name:mk1ca52f21d546506be3e302e77df6cd2e1b289a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:34:53.003326   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 18:34:53.004362   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0419 18:34:53.004536   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 18:34:53.004711   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 18:34:53.004899   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 18:34:53.005063   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 18:34:53.005183   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 18:34:53.011683   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 18:34:53.014364   13300 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem (1338 bytes)
	W0419 18:34:53.014953   13300 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416_empty.pem, impossibly tiny 0 bytes
	I0419 18:34:53.014953   13300 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0419 18:34:53.015228   13300 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0419 18:34:53.015482   13300 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0419 18:34:53.015712   13300 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0419 18:34:53.015966   13300 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem (1708 bytes)
	I0419 18:34:53.015966   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /usr/share/ca-certificates/34162.pem
	I0419 18:34:53.016558   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:34:53.016734   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem -> /usr/share/ca-certificates/3416.pem
	I0419 18:34:53.016998   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 18:34:53.070685   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 18:34:53.120234   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 18:34:53.166947   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 18:34:53.205832   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0419 18:34:53.256228   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0419 18:34:53.282309   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 18:34:53.336464   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0419 18:34:53.382472   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /usr/share/ca-certificates/34162.pem (1708 bytes)
	I0419 18:34:53.429379   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 18:34:53.480851   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem --> /usr/share/ca-certificates/3416.pem (1338 bytes)
	I0419 18:34:53.525926   13300 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0419 18:34:53.574826   13300 ssh_runner.go:195] Run: openssl version
	I0419 18:34:53.591309   13300 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0419 18:34:53.604793   13300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3416.pem && ln -fs /usr/share/ca-certificates/3416.pem /etc/ssl/certs/3416.pem"
	I0419 18:34:53.639230   13300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3416.pem
	I0419 18:34:53.646912   13300 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 18:34:53.646912   13300 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 18:34:53.659616   13300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3416.pem
	I0419 18:34:53.670028   13300 command_runner.go:130] > 51391683
	I0419 18:34:53.687300   13300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3416.pem /etc/ssl/certs/51391683.0"
	I0419 18:34:53.727249   13300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34162.pem && ln -fs /usr/share/ca-certificates/34162.pem /etc/ssl/certs/34162.pem"
	I0419 18:34:53.756405   13300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34162.pem
	I0419 18:34:53.765333   13300 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 18:34:53.765438   13300 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 18:34:53.777888   13300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34162.pem
	I0419 18:34:53.786095   13300 command_runner.go:130] > 3ec20f2e
	I0419 18:34:53.795968   13300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34162.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 18:34:53.831223   13300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 18:34:53.863525   13300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:34:53.868957   13300 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:34:53.871977   13300 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:34:53.883930   13300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:34:53.893136   13300 command_runner.go:130] > b5213941
	I0419 18:34:53.905777   13300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 18:34:53.942185   13300 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 18:34:53.947885   13300 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 18:34:53.948069   13300 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 18:34:53.948503   13300 kubeadm.go:391] StartCluster: {Name:multinode-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.42.231 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 18:34:53.958589   13300 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0419 18:34:53.989569   13300 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0419 18:34:54.014153   13300 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0419 18:34:54.014201   13300 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0419 18:34:54.014201   13300 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0419 18:34:54.027834   13300 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 18:34:54.064120   13300 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 18:34:54.081454   13300 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0419 18:34:54.081542   13300 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0419 18:34:54.081542   13300 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0419 18:34:54.081542   13300 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 18:34:54.081624   13300 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 18:34:54.081624   13300 kubeadm.go:156] found existing configuration files:
	
	I0419 18:34:54.095567   13300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0419 18:34:54.101306   13300 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 18:34:54.116304   13300 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 18:34:54.128113   13300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 18:34:54.162921   13300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0419 18:34:54.171745   13300 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 18:34:54.171745   13300 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 18:34:54.192676   13300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 18:34:54.224983   13300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0419 18:34:54.229570   13300 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 18:34:54.245184   13300 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 18:34:54.257376   13300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 18:34:54.293771   13300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0419 18:34:54.304196   13300 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 18:34:54.304196   13300 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 18:34:54.325193   13300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0419 18:34:54.348492   13300 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0419 18:34:54.806579   13300 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0419 18:34:54.806579   13300 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0419 18:35:08.878982   13300 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0419 18:35:08.879067   13300 command_runner.go:130] > [init] Using Kubernetes version: v1.30.0
	I0419 18:35:08.879186   13300 command_runner.go:130] > [preflight] Running pre-flight checks
	I0419 18:35:08.879186   13300 kubeadm.go:309] [preflight] Running pre-flight checks
	I0419 18:35:08.879488   13300 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0419 18:35:08.879488   13300 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0419 18:35:08.879604   13300 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0419 18:35:08.879604   13300 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0419 18:35:08.879604   13300 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0419 18:35:08.879604   13300 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0419 18:35:08.880161   13300 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0419 18:35:08.880161   13300 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0419 18:35:08.882693   13300 out.go:204]   - Generating certificates and keys ...
	I0419 18:35:08.882907   13300 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0419 18:35:08.883078   13300 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0419 18:35:08.883212   13300 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0419 18:35:08.883212   13300 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0419 18:35:08.883212   13300 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0419 18:35:08.883212   13300 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0419 18:35:08.883212   13300 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0419 18:35:08.883212   13300 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0419 18:35:08.883753   13300 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0419 18:35:08.883753   13300 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0419 18:35:08.883989   13300 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0419 18:35:08.884078   13300 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0419 18:35:08.884183   13300 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0419 18:35:08.884183   13300 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0419 18:35:08.884460   13300 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-348000] and IPs [172.19.42.231 127.0.0.1 ::1]
	I0419 18:35:08.884558   13300 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-348000] and IPs [172.19.42.231 127.0.0.1 ::1]
	I0419 18:35:08.884781   13300 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0419 18:35:08.884851   13300 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0419 18:35:08.885122   13300 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-348000] and IPs [172.19.42.231 127.0.0.1 ::1]
	I0419 18:35:08.885205   13300 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-348000] and IPs [172.19.42.231 127.0.0.1 ::1]
	I0419 18:35:08.885276   13300 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0419 18:35:08.885276   13300 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0419 18:35:08.885276   13300 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0419 18:35:08.885276   13300 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0419 18:35:08.885276   13300 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0419 18:35:08.885276   13300 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0419 18:35:08.885276   13300 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0419 18:35:08.885276   13300 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0419 18:35:08.885817   13300 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0419 18:35:08.885817   13300 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0419 18:35:08.885817   13300 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0419 18:35:08.886001   13300 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0419 18:35:08.886196   13300 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0419 18:35:08.886196   13300 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0419 18:35:08.886196   13300 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0419 18:35:08.886196   13300 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0419 18:35:08.886196   13300 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0419 18:35:08.886196   13300 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0419 18:35:08.886748   13300 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0419 18:35:08.886748   13300 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0419 18:35:08.886973   13300 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0419 18:35:08.892242   13300 out.go:204]   - Booting up control plane ...
	I0419 18:35:08.887025   13300 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0419 18:35:08.892779   13300 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0419 18:35:08.892779   13300 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0419 18:35:08.892994   13300 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0419 18:35:08.893044   13300 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0419 18:35:08.893169   13300 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0419 18:35:08.893169   13300 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0419 18:35:08.893169   13300 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 18:35:08.893169   13300 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 18:35:08.893169   13300 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 18:35:08.893169   13300 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 18:35:08.893169   13300 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0419 18:35:08.893169   13300 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0419 18:35:08.893169   13300 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0419 18:35:08.893169   13300 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0419 18:35:08.894103   13300 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0419 18:35:08.894103   13300 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0419 18:35:08.894103   13300 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.009762161s
	I0419 18:35:08.894103   13300 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.009762161s
	I0419 18:35:08.894103   13300 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0419 18:35:08.894103   13300 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0419 18:35:08.894103   13300 command_runner.go:130] > [api-check] The API server is healthy after 7.002950119s
	I0419 18:35:08.894103   13300 kubeadm.go:309] [api-check] The API server is healthy after 7.002950119s
	I0419 18:35:08.894103   13300 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0419 18:35:08.894103   13300 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0419 18:35:08.894103   13300 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0419 18:35:08.894103   13300 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0419 18:35:08.894103   13300 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0419 18:35:08.894103   13300 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0419 18:35:08.894103   13300 kubeadm.go:309] [mark-control-plane] Marking the node multinode-348000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0419 18:35:08.894103   13300 command_runner.go:130] > [mark-control-plane] Marking the node multinode-348000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0419 18:35:08.894103   13300 kubeadm.go:309] [bootstrap-token] Using token: tn58s3.3j7r2ur6gzwi80gc
	I0419 18:35:08.894103   13300 command_runner.go:130] > [bootstrap-token] Using token: tn58s3.3j7r2ur6gzwi80gc
	I0419 18:35:08.904033   13300 out.go:204]   - Configuring RBAC rules ...
	I0419 18:35:08.904735   13300 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0419 18:35:08.904790   13300 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0419 18:35:08.904959   13300 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0419 18:35:08.904959   13300 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0419 18:35:08.905212   13300 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0419 18:35:08.905212   13300 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0419 18:35:08.905670   13300 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0419 18:35:08.905670   13300 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0419 18:35:08.905829   13300 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0419 18:35:08.905829   13300 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0419 18:35:08.906142   13300 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0419 18:35:08.906142   13300 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0419 18:35:08.906442   13300 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0419 18:35:08.906442   13300 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0419 18:35:08.906669   13300 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0419 18:35:08.906724   13300 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0419 18:35:08.906827   13300 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0419 18:35:08.906827   13300 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0419 18:35:08.906827   13300 kubeadm.go:309] 
	I0419 18:35:08.906988   13300 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0419 18:35:08.906988   13300 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0419 18:35:08.906988   13300 kubeadm.go:309] 
	I0419 18:35:08.907313   13300 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0419 18:35:08.907313   13300 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0419 18:35:08.907313   13300 kubeadm.go:309] 
	I0419 18:35:08.907313   13300 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0419 18:35:08.907313   13300 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0419 18:35:08.907757   13300 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0419 18:35:08.907757   13300 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0419 18:35:08.907757   13300 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0419 18:35:08.907757   13300 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0419 18:35:08.907757   13300 kubeadm.go:309] 
	I0419 18:35:08.907757   13300 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0419 18:35:08.907757   13300 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0419 18:35:08.907757   13300 kubeadm.go:309] 
	I0419 18:35:08.907757   13300 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0419 18:35:08.908348   13300 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0419 18:35:08.908348   13300 kubeadm.go:309] 
	I0419 18:35:08.908500   13300 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0419 18:35:08.908551   13300 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0419 18:35:08.908821   13300 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0419 18:35:08.908821   13300 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0419 18:35:08.908821   13300 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0419 18:35:08.909132   13300 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0419 18:35:08.909240   13300 kubeadm.go:309] 
	I0419 18:35:08.909496   13300 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0419 18:35:08.909496   13300 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0419 18:35:08.909496   13300 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0419 18:35:08.909784   13300 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0419 18:35:08.909837   13300 kubeadm.go:309] 
	I0419 18:35:08.910037   13300 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token tn58s3.3j7r2ur6gzwi80gc \
	I0419 18:35:08.910037   13300 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token tn58s3.3j7r2ur6gzwi80gc \
	I0419 18:35:08.910246   13300 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 \
	I0419 18:35:08.910246   13300 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 \
	I0419 18:35:08.910246   13300 command_runner.go:130] > 	--control-plane 
	I0419 18:35:08.910246   13300 kubeadm.go:309] 	--control-plane 
	I0419 18:35:08.910473   13300 kubeadm.go:309] 
	I0419 18:35:08.910795   13300 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0419 18:35:08.910896   13300 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0419 18:35:08.910896   13300 kubeadm.go:309] 
	I0419 18:35:08.911066   13300 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token tn58s3.3j7r2ur6gzwi80gc \
	I0419 18:35:08.911066   13300 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token tn58s3.3j7r2ur6gzwi80gc \
	I0419 18:35:08.911066   13300 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 
	I0419 18:35:08.911066   13300 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 
	I0419 18:35:08.911066   13300 cni.go:84] Creating CNI manager for ""
	I0419 18:35:08.911066   13300 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0419 18:35:08.916048   13300 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0419 18:35:08.936653   13300 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0419 18:35:08.947310   13300 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0419 18:35:08.947367   13300 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0419 18:35:08.947420   13300 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0419 18:35:08.947420   13300 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0419 18:35:08.947456   13300 command_runner.go:130] > Access: 2024-04-20 01:33:15.526829400 +0000
	I0419 18:35:08.947523   13300 command_runner.go:130] > Modify: 2024-04-18 23:25:47.000000000 +0000
	I0419 18:35:08.947523   13300 command_runner.go:130] > Change: 2024-04-19 18:33:07.371000000 +0000
	I0419 18:35:08.947588   13300 command_runner.go:130] >  Birth: -
	I0419 18:35:08.948295   13300 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0419 18:35:08.948359   13300 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0419 18:35:09.005821   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0419 18:35:09.739221   13300 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0419 18:35:09.739328   13300 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0419 18:35:09.739328   13300 command_runner.go:130] > serviceaccount/kindnet created
	I0419 18:35:09.739328   13300 command_runner.go:130] > daemonset.apps/kindnet created
	I0419 18:35:09.739328   13300 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0419 18:35:09.753668   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:09.753668   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-348000 minikube.k8s.io/updated_at=2024_04_19T18_35_09_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=multinode-348000 minikube.k8s.io/primary=true
	I0419 18:35:09.759701   13300 command_runner.go:130] > -16
	I0419 18:35:09.759701   13300 ops.go:34] apiserver oom_adj: -16
	I0419 18:35:09.927914   13300 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0419 18:35:09.940626   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:09.943940   13300 command_runner.go:130] > node/multinode-348000 labeled
	I0419 18:35:10.042265   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:10.457266   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:10.577129   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:10.946041   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:11.066469   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:11.450779   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:11.561731   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:11.940446   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:12.055368   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:12.452197   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:12.556462   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:12.949020   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:13.068012   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:13.457971   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:13.559369   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:13.953244   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:14.061335   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:14.453747   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:14.552733   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:14.946691   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:15.065034   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:15.448075   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:15.555603   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:15.948931   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:16.054110   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:16.447076   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:16.559918   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:16.948492   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:17.052600   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:17.446342   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:17.556656   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:17.954928   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:18.066511   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:18.442948   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:18.550497   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:18.949084   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:19.055453   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:19.443044   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:19.554280   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:19.955131   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:20.058430   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:20.453493   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:20.566123   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:20.957196   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:21.067324   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:21.453613   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:21.558663   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:21.945646   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:22.050407   13300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0419 18:35:22.457380   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 18:35:22.637705   13300 command_runner.go:130] > NAME      SECRETS   AGE
	I0419 18:35:22.637764   13300 command_runner.go:130] > default   0         0s
	I0419 18:35:22.637819   13300 kubeadm.go:1107] duration metric: took 12.8984078s to wait for elevateKubeSystemPrivileges
	W0419 18:35:22.637876   13300 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0419 18:35:22.637876   13300 kubeadm.go:393] duration metric: took 28.6893106s to StartCluster
	I0419 18:35:22.637968   13300 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:35:22.637968   13300 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 18:35:22.640088   13300 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:35:22.641529   13300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0419 18:35:22.641755   13300 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.42.231 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 18:35:22.641697   13300 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0419 18:35:22.641913   13300 addons.go:69] Setting storage-provisioner=true in profile "multinode-348000"
	I0419 18:35:22.642060   13300 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:35:22.642117   13300 addons.go:234] Setting addon storage-provisioner=true in "multinode-348000"
	I0419 18:35:22.649109   13300 out.go:177] * Verifying Kubernetes components...
	I0419 18:35:22.641913   13300 addons.go:69] Setting default-storageclass=true in profile "multinode-348000"
	I0419 18:35:22.642185   13300 host.go:66] Checking if "multinode-348000" exists ...
	I0419 18:35:22.650354   13300 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-348000"
	I0419 18:35:22.652265   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:35:22.653098   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:35:22.663961   13300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:35:22.947518   13300 command_runner.go:130] > apiVersion: v1
	I0419 18:35:22.947636   13300 command_runner.go:130] > data:
	I0419 18:35:22.947636   13300 command_runner.go:130] >   Corefile: |
	I0419 18:35:22.947681   13300 command_runner.go:130] >     .:53 {
	I0419 18:35:22.947681   13300 command_runner.go:130] >         errors
	I0419 18:35:22.947681   13300 command_runner.go:130] >         health {
	I0419 18:35:22.947737   13300 command_runner.go:130] >            lameduck 5s
	I0419 18:35:22.947737   13300 command_runner.go:130] >         }
	I0419 18:35:22.947737   13300 command_runner.go:130] >         ready
	I0419 18:35:22.947737   13300 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0419 18:35:22.947737   13300 command_runner.go:130] >            pods insecure
	I0419 18:35:22.947737   13300 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0419 18:35:22.947737   13300 command_runner.go:130] >            ttl 30
	I0419 18:35:22.947737   13300 command_runner.go:130] >         }
	I0419 18:35:22.947737   13300 command_runner.go:130] >         prometheus :9153
	I0419 18:35:22.947737   13300 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0419 18:35:22.947877   13300 command_runner.go:130] >            max_concurrent 1000
	I0419 18:35:22.947877   13300 command_runner.go:130] >         }
	I0419 18:35:22.947877   13300 command_runner.go:130] >         cache 30
	I0419 18:35:22.947965   13300 command_runner.go:130] >         loop
	I0419 18:35:22.947965   13300 command_runner.go:130] >         reload
	I0419 18:35:22.948014   13300 command_runner.go:130] >         loadbalance
	I0419 18:35:22.948014   13300 command_runner.go:130] >     }
	I0419 18:35:22.948014   13300 command_runner.go:130] > kind: ConfigMap
	I0419 18:35:22.948014   13300 command_runner.go:130] > metadata:
	I0419 18:35:22.948014   13300 command_runner.go:130] >   creationTimestamp: "2024-04-20T01:35:08Z"
	I0419 18:35:22.948094   13300 command_runner.go:130] >   name: coredns
	I0419 18:35:22.948155   13300 command_runner.go:130] >   namespace: kube-system
	I0419 18:35:22.948155   13300 command_runner.go:130] >   resourceVersion: "228"
	I0419 18:35:22.948155   13300 command_runner.go:130] >   uid: c6cff7b4-57d5-4669-b293-b4a4ae611c8a
	I0419 18:35:22.948555   13300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.32.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0419 18:35:23.066808   13300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 18:35:23.425660   13300 command_runner.go:130] > configmap/coredns replaced
	I0419 18:35:23.431860   13300 start.go:946] {"host.minikube.internal": 172.19.32.1} host record injected into CoreDNS's ConfigMap
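The `sed` pipeline above rewrites the CoreDNS Corefile in place: it inserts a `hosts` block (mapping `host.minikube.internal` to the host gateway IP, 172.19.32.1 here) immediately before the `forward` plugin, adds a `log` directive after `errors`, and pipes the result to `kubectl replace`. A minimal Python re-creation of just the text transformation — an illustration only; minikube performs this with `sed` inside the VM, as logged:

```python
def inject_host_record(corefile: str, host_ip: str) -> str:
    """Insert a CoreDNS `hosts` block before the `forward` plugin so that
    host.minikube.internal resolves to the host gateway IP.
    Illustrative re-creation of the sed expression shown in the log."""
    hosts_block = (
        "        hosts {\n"
        f"           {host_ip} host.minikube.internal\n"
        "           fallthrough\n"
        "        }\n"
    )
    out_lines = []
    for line in corefile.splitlines(keepends=True):
        if line.startswith("        forward . /etc/resolv.conf"):
            out_lines.append(hosts_block)  # insert before the forward plugin
        out_lines.append(line)
    return "".join(out_lines)

# A trimmed Corefile fragment matching the ConfigMap dumped above.
corefile = (
    "        errors\n"
    "        forward . /etc/resolv.conf {\n"
    "           max_concurrent 1000\n"
    "        }\n"
)
print(inject_host_record(corefile, "172.19.32.1"))
```

The real flow then replaces the `coredns` ConfigMap with the rewritten Corefile, which is why the next log line reports `configmap/coredns replaced`.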
	I0419 18:35:23.433329   13300 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 18:35:23.433504   13300 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 18:35:23.434158   13300 kapi.go:59] client config for multinode-348000: &rest.Config{Host:"https://172.19.42.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c35620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 18:35:23.434643   13300 kapi.go:59] client config for multinode-348000: &rest.Config{Host:"https://172.19.42.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c35620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 18:35:23.436050   13300 cert_rotation.go:137] Starting client certificate rotation controller
	I0419 18:35:23.436679   13300 node_ready.go:35] waiting up to 6m0s for node "multinode-348000" to be "Ready" ...
	I0419 18:35:23.436679   13300 round_trippers.go:463] GET https://172.19.42.231:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0419 18:35:23.436679   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:23.436679   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:23.436679   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:23.436679   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:23.436679   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:23.437219   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:23.437272   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:23.452480   13300 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0419 18:35:23.452480   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:23.452480   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:23.452579   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:23.452579   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:23.452579   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:23.452579   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:23 GMT
	I0419 18:35:23.452579   13300 round_trippers.go:580]     Audit-Id: bd4cff9c-2089-4a82-854e-2331f24b89aa
	I0419 18:35:23.452967   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:23.454613   13300 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0419 18:35:23.455445   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:23.455445   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:23.455445   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:23.455517   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:23.455517   13300 round_trippers.go:580]     Content-Length: 291
	I0419 18:35:23.455517   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:23 GMT
	I0419 18:35:23.455517   13300 round_trippers.go:580]     Audit-Id: 91440054-d7d3-41fb-a429-65e0d87af678
	I0419 18:35:23.455517   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:23.455517   13300 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f84db346-7825-4031-beee-99dfef80b876","resourceVersion":"356","creationTimestamp":"2024-04-20T01:35:08Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0419 18:35:23.456015   13300 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f84db346-7825-4031-beee-99dfef80b876","resourceVersion":"356","creationTimestamp":"2024-04-20T01:35:08Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0419 18:35:23.456187   13300 round_trippers.go:463] PUT https://172.19.42.231:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0419 18:35:23.456187   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:23.456260   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:23.456260   13300 round_trippers.go:473]     Content-Type: application/json
	I0419 18:35:23.456322   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:23.465953   13300 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0419 18:35:23.465953   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:23.465953   13300 round_trippers.go:580]     Content-Length: 291
	I0419 18:35:23.465953   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:23 GMT
	I0419 18:35:23.465953   13300 round_trippers.go:580]     Audit-Id: e5809b1b-5052-4afe-8a27-0d720fba4b5f
	I0419 18:35:23.477811   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:23.477811   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:23.477811   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:23.477811   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:23.477991   13300 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f84db346-7825-4031-beee-99dfef80b876","resourceVersion":"358","creationTimestamp":"2024-04-20T01:35:08Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0419 18:35:23.947859   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:23.947859   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:23.947859   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:23.947859   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:23.948120   13300 round_trippers.go:463] GET https://172.19.42.231:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0419 18:35:23.948120   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:23.948120   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:23.948241   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:23.953790   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:23.953951   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:23.953951   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:23.953951   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:23 GMT
	I0419 18:35:23.953951   13300 round_trippers.go:580]     Audit-Id: b573e0eb-a57a-40ab-9afd-97075062706d
	I0419 18:35:23.953951   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:23.953951   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:23.953951   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:23.953951   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:23.953951   13300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:35:23.953951   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:23.954497   13300 round_trippers.go:580]     Audit-Id: c99147a2-f4d6-408b-9cab-0b8fad476e2e
	I0419 18:35:23.954497   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:23.954497   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:23.954497   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:23.954497   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:23.954497   13300 round_trippers.go:580]     Content-Length: 291
	I0419 18:35:23.954603   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:23 GMT
	I0419 18:35:23.954731   13300 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f84db346-7825-4031-beee-99dfef80b876","resourceVersion":"368","creationTimestamp":"2024-04-20T01:35:08Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0419 18:35:23.954957   13300 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-348000" context rescaled to 1 replicas
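The rescale above goes through the `autoscaling/v1` Scale subresource: minikube GETs the current Scale object for the `coredns` deployment, overwrites `spec.replicas` with 1, and PUTs it back (the server reconciles `status.replicas` asynchronously, which is why the PUT response still shows `status.replicas: 2` and a later GET shows 1). A sketch of the body manipulation using the response shape from the log — an illustration; minikube does this via client-go, not raw JSON editing:

```python
import json

# Scale object as returned by GET .../deployments/coredns/scale (shape from the log).
get_body = json.loads("""{
  "kind": "Scale", "apiVersion": "autoscaling/v1",
  "metadata": {"name": "coredns", "namespace": "kube-system"},
  "spec": {"replicas": 2},
  "status": {"replicas": 2, "selector": "k8s-app=kube-dns"}
}""")

def rescale_body(scale_obj: dict, replicas: int) -> str:
    """Build the PUT body: the fetched Scale object with spec.replicas
    overwritten. status is sent back untouched; the server ignores it."""
    put_obj = dict(scale_obj)
    put_obj["spec"] = {**scale_obj["spec"], "replicas": replicas}
    return json.dumps(put_obj)

print(rescale_body(get_body, 1))
```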
	I0419 18:35:24.443371   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:24.443499   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:24.443568   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:24.443568   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:24.443887   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:24.443887   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:24.443887   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:24 GMT
	I0419 18:35:24.447420   13300 round_trippers.go:580]     Audit-Id: d1f595b4-2d18-4d6b-b030-611596150445
	I0419 18:35:24.447420   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:24.447420   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:24.447420   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:24.447420   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:24.447582   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:24.829934   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:35:24.837022   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:35:24.840616   13300 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 18:35:24.846070   13300 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 18:35:24.846070   13300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0419 18:35:24.846070   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:35:24.847429   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:35:24.847429   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:35:24.848687   13300 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 18:35:24.849119   13300 kapi.go:59] client config for multinode-348000: &rest.Config{Host:"https://172.19.42.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c35620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 18:35:24.850164   13300 addons.go:234] Setting addon default-storageclass=true in "multinode-348000"
	I0419 18:35:24.850272   13300 host.go:66] Checking if "multinode-348000" exists ...
	I0419 18:35:24.850526   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:35:24.948577   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:24.948577   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:24.948577   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:24.948577   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:24.954142   13300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:35:24.954142   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:24.954142   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:24.954142   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:24 GMT
	I0419 18:35:24.954142   13300 round_trippers.go:580]     Audit-Id: 791aeb47-d8e9-42f7-813e-a9045828373d
	I0419 18:35:24.954142   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:24.954142   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:24.954142   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:24.958153   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:25.454935   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:25.455050   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:25.455050   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:25.455125   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:25.455630   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:25.458428   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:25.458428   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:25.458491   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:25.458491   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:25 GMT
	I0419 18:35:25.458556   13300 round_trippers.go:580]     Audit-Id: 415a946a-4c61-4058-be60-29906d54b66a
	I0419 18:35:25.458556   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:25.458556   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:25.458960   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:25.459599   13300 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
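The repeated `GET /api/v1/nodes/multinode-348000` requests are node_ready.go's readiness poll: roughly every 500 ms it fetches the Node object, inspects its `Ready` condition, and gives up after the 6m0s budget announced earlier. A condensed Python sketch of that wait loop — interval and timeout are taken from the log, and `node_ready` here is a hypothetical stand-in for the API call:

```python
import time

def wait_for(check, timeout: float, interval: float = 0.5) -> bool:
    """Poll `check()` until it returns True or `timeout` seconds elapse,
    mirroring the ~500 ms node-readiness polling visible in the log."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Hypothetical stand-in for "GET the Node and inspect its Ready condition".
attempts = []
def node_ready() -> bool:
    attempts.append(1)
    return len(attempts) >= 3  # reports "Ready" on the third poll

print(wait_for(node_ready, timeout=5.0, interval=0.01))
```

In the real test, the loop exits early with `node "multinode-348000" hasn't reached Ready` only if the 6-minute deadline passes; until then each `"Ready":"False"` line simply triggers another poll.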
	I0419 18:35:25.947058   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:25.947058   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:25.947127   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:25.947127   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:25.951747   13300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:35:25.951747   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:25.951747   13300 round_trippers.go:580]     Audit-Id: efe2632b-ab3d-4370-90a3-2873eb6f877c
	I0419 18:35:25.951747   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:25.951747   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:25.951747   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:25.951747   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:25.951747   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:25 GMT
	I0419 18:35:25.951747   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:26.451881   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:26.452118   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:26.452118   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:26.452118   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:26.452478   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:26.455747   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:26.455747   13300 round_trippers.go:580]     Audit-Id: ea952198-b13b-482f-937a-c88f915d022c
	I0419 18:35:26.455810   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:26.455810   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:26.455810   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:26.455810   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:26.455810   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:26 GMT
	I0419 18:35:26.456072   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:26.937448   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:26.937448   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:26.937448   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:26.937448   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:26.941494   13300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:35:26.944376   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:26.944376   13300 round_trippers.go:580]     Audit-Id: 23ef858b-855f-41dd-be41-d59134f6d3ae
	I0419 18:35:26.944376   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:26.944376   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:26.944442   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:26.944442   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:26.944494   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:26 GMT
	I0419 18:35:26.944693   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:27.041161   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:35:27.041334   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:35:27.041522   13300 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0419 18:35:27.041599   13300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0419 18:35:27.041599   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:35:27.044029   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:35:27.044029   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:35:27.044118   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:35:27.443917   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:27.443917   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:27.443917   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:27.443917   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:27.446183   13300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:35:27.448757   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:27.448757   13300 round_trippers.go:580]     Audit-Id: 30523eb5-26d2-4079-bd3b-5afef2bcf7c8
	I0419 18:35:27.448757   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:27.448757   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:27.448757   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:27.448757   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:27.448757   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:27 GMT
	I0419 18:35:27.449826   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:27.939815   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:27.939815   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:27.939815   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:27.939815   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:27.946275   13300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:35:27.946350   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:27.946350   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:27 GMT
	I0419 18:35:27.946417   13300 round_trippers.go:580]     Audit-Id: c652e1be-d8c3-4a4f-9420-048a274b5ef6
	I0419 18:35:27.946417   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:27.946417   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:27.946417   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:27.946417   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:27.946735   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:27.947427   13300 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:35:28.450017   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:28.450366   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:28.450366   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:28.450366   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:28.467096   13300 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0419 18:35:28.467096   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:28.467096   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:28.467096   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:28 GMT
	I0419 18:35:28.469549   13300 round_trippers.go:580]     Audit-Id: 72301025-aaad-49e1-91ed-65ee86726aab
	I0419 18:35:28.469549   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:28.469549   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:28.469549   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:28.470130   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:28.950141   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:28.950212   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:28.950212   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:28.950212   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:28.958600   13300 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 18:35:28.958600   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:28.958600   13300 round_trippers.go:580]     Audit-Id: 886775f9-7e2a-460e-af78-38b2d1895875
	I0419 18:35:28.958600   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:28.958600   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:28.958600   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:28.958600   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:28.958600   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:28 GMT
	I0419 18:35:28.959265   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:29.251302   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:35:29.251302   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:35:29.251302   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:35:29.440202   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:29.440266   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:29.440328   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:29.440328   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:29.449078   13300 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 18:35:29.449078   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:29.449078   13300 round_trippers.go:580]     Audit-Id: e6c0ce97-e96b-454f-aa43-7f93e431ad4a
	I0419 18:35:29.449078   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:29.449078   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:29.449078   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:29.449078   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:29.449078   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:29 GMT
	I0419 18:35:29.450373   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:29.762849   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:35:29.763764   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:35:29.763857   13300 sshutil.go:53] new ssh client: &{IP:172.19.42.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 18:35:29.895128   13300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 18:35:29.951349   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:29.951349   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:29.951349   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:29.951349   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:29.952069   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:29.952069   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:29.952069   13300 round_trippers.go:580]     Audit-Id: aa5c7096-8435-449b-b638-bd24bfcfaad0
	I0419 18:35:29.952069   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:29.952069   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:29.952069   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:29.952069   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:29.952069   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:29 GMT
	I0419 18:35:29.955715   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:29.956404   13300 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:35:30.446327   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:30.446327   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:30.446327   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:30.446327   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:30.446932   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:30.446932   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:30.446932   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:30.446932   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:30.451494   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:30.451494   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:30.451494   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:30 GMT
	I0419 18:35:30.451494   13300 round_trippers.go:580]     Audit-Id: 60f4e217-df0d-4794-97b7-5e7cc6c77761
	I0419 18:35:30.451819   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:30.517648   13300 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0419 18:35:30.519546   13300 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0419 18:35:30.519639   13300 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0419 18:35:30.519639   13300 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0419 18:35:30.519704   13300 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0419 18:35:30.519704   13300 command_runner.go:130] > pod/storage-provisioner created
	I0419 18:35:30.937066   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:30.937488   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:30.937488   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:30.937488   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:30.941552   13300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:35:30.941552   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:30.941552   13300 round_trippers.go:580]     Audit-Id: c584b1ea-9963-453f-aaf5-f3e88a1e609a
	I0419 18:35:30.941552   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:30.941552   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:30.941552   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:30.941552   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:30.941552   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:30 GMT
	I0419 18:35:30.941552   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:31.442928   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:31.442928   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:31.442928   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:31.442928   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:31.443467   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:31.443467   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:31.443467   13300 round_trippers.go:580]     Audit-Id: 0c036982-aca0-4bf4-97c5-40fb8c88b65e
	I0419 18:35:31.446731   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:31.446731   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:31.446731   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:31.446731   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:31.446731   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:31 GMT
	I0419 18:35:31.447108   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:31.827082   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:35:31.839124   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:35:31.839341   13300 sshutil.go:53] new ssh client: &{IP:172.19.42.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 18:35:31.945853   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:31.945912   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:31.945974   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:31.945974   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:31.954900   13300 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 18:35:31.955381   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:31.955381   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:31.955441   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:31.955514   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:31.955514   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:31.955567   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:31 GMT
	I0419 18:35:31.955567   13300 round_trippers.go:580]     Audit-Id: 93aafc22-11bc-4ad0-82e8-382ec3fb0a87
	I0419 18:35:31.955611   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:31.994551   13300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0419 18:35:32.237115   13300 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0419 18:35:32.237370   13300 round_trippers.go:463] GET https://172.19.42.231:8443/apis/storage.k8s.io/v1/storageclasses
	I0419 18:35:32.237472   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:32.237472   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:32.237472   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:32.239376   13300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:35:32.239376   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:32.239376   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:32.239376   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:32.239376   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:32.239376   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:32.239376   13300 round_trippers.go:580]     Content-Length: 1273
	I0419 18:35:32.239376   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:32 GMT
	I0419 18:35:32.239376   13300 round_trippers.go:580]     Audit-Id: e04ae7df-0053-41cc-8f77-b051b4484ece
	I0419 18:35:32.239376   13300 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"394"},"items":[{"metadata":{"name":"standard","uid":"8f272017-ba8a-4112-9343-779f14c8be5d","resourceVersion":"394","creationTimestamp":"2024-04-20T01:35:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-20T01:35:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0419 18:35:32.242602   13300 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"8f272017-ba8a-4112-9343-779f14c8be5d","resourceVersion":"394","creationTimestamp":"2024-04-20T01:35:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-20T01:35:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0419 18:35:32.242602   13300 round_trippers.go:463] PUT https://172.19.42.231:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0419 18:35:32.242602   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:32.242787   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:32.242787   13300 round_trippers.go:473]     Content-Type: application/json
	I0419 18:35:32.242787   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:32.246711   13300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:35:32.246711   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:32.246711   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:32 GMT
	I0419 18:35:32.246998   13300 round_trippers.go:580]     Audit-Id: 592c656a-40dc-4cc6-a0b0-cdf24313638a
	I0419 18:35:32.246998   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:32.246998   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:32.246998   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:32.246998   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:32.246998   13300 round_trippers.go:580]     Content-Length: 1220
	I0419 18:35:32.247164   13300 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"8f272017-ba8a-4112-9343-779f14c8be5d","resourceVersion":"394","creationTimestamp":"2024-04-20T01:35:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-20T01:35:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0419 18:35:32.288567   13300 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0419 18:35:32.296715   13300 addons.go:505] duration metric: took 9.6548885s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0419 18:35:32.440686   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:32.440686   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:32.440686   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:32.440686   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:32.441467   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:32.441467   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:32.441467   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:32 GMT
	I0419 18:35:32.441467   13300 round_trippers.go:580]     Audit-Id: 2e440c1d-0c08-43c1-a0c3-102cc3c69d48
	I0419 18:35:32.441467   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:32.441467   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:32.441467   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:32.441467   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:32.445269   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:32.445814   13300 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:35:32.945948   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:32.946047   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:32.946047   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:32.946047   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:32.951214   13300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:35:32.951214   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:32.951304   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:32.951304   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:32.951304   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:32 GMT
	I0419 18:35:32.951304   13300 round_trippers.go:580]     Audit-Id: 2cc8bbf0-8ca2-4b6e-b709-bab3a563506c
	I0419 18:35:32.951304   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:32.951304   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:32.951584   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:33.445898   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:33.445981   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:33.445981   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:33.445981   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:33.446251   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:33.446251   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:33.446251   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:33.446251   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:33.450788   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:33.450788   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:33 GMT
	I0419 18:35:33.450788   13300 round_trippers.go:580]     Audit-Id: d5e2841a-6b02-495a-844f-23e6e4ce4f09
	I0419 18:35:33.450788   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:33.451038   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:33.943738   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:33.943959   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:33.943959   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:33.943959   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:33.944281   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:33.944281   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:33.944281   13300 round_trippers.go:580]     Audit-Id: e55a3882-938c-403e-8811-90e04b9f2917
	I0419 18:35:33.944281   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:33.948214   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:33.948214   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:33.948214   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:33.948214   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:33 GMT
	I0419 18:35:33.948439   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:34.451681   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:34.451889   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:34.451889   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:34.451889   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:34.452198   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:34.455460   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:34.455460   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:34.455460   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:34.455460   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:34 GMT
	I0419 18:35:34.455460   13300 round_trippers.go:580]     Audit-Id: dbd67fa2-ee11-4ff0-803e-a31e3b281193
	I0419 18:35:34.455460   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:34.455460   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:34.455780   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:34.455808   13300 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:35:34.938770   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:34.938847   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:34.938847   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:34.938847   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:34.939170   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:34.943172   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:34.943172   13300 round_trippers.go:580]     Audit-Id: adf01a4c-875f-4ddb-ba44-f80b2f94fc78
	I0419 18:35:34.943172   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:34.943172   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:34.943172   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:34.943285   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:34.943285   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:34 GMT
	I0419 18:35:34.943795   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:35.444566   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:35.444566   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:35.444667   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:35.444667   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:35.444996   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:35.444996   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:35.444996   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:35 GMT
	I0419 18:35:35.444996   13300 round_trippers.go:580]     Audit-Id: 677320e9-d9bc-49c9-8753-254a3faa1e96
	I0419 18:35:35.444996   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:35.444996   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:35.449153   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:35.449153   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:35.449878   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:35.964318   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:35.964418   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:35.964418   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:35.964418   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:35.965197   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:35.965197   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:35.965197   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:35.965197   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:35.965197   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:35.965197   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:35 GMT
	I0419 18:35:35.965197   13300 round_trippers.go:580]     Audit-Id: 0af5d9de-7858-4137-b655-d5c53204ff9b
	I0419 18:35:35.965197   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:35.968872   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:36.438121   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:36.438307   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:36.438307   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:36.438307   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:36.442809   13300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:35:36.442929   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:36.442929   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:36.442929   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:36.442929   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:36 GMT
	I0419 18:35:36.442929   13300 round_trippers.go:580]     Audit-Id: 44a36980-95e3-4223-a20e-f1fd84747924
	I0419 18:35:36.442929   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:36.442929   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:36.443350   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"321","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0419 18:35:36.950145   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:36.950179   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:36.950179   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:36.950179   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:36.955754   13300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:35:36.955754   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:36.957134   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:36.957134   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:36 GMT
	I0419 18:35:36.957134   13300 round_trippers.go:580]     Audit-Id: 11317500-3940-45e9-8f5e-d1bdf2aa47f2
	I0419 18:35:36.957134   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:36.957134   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:36.957134   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:36.957407   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"402","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0419 18:35:36.957646   13300 node_ready.go:49] node "multinode-348000" has status "Ready":"True"
	I0419 18:35:36.957646   13300 node_ready.go:38] duration metric: took 13.5209375s for node "multinode-348000" to be "Ready" ...
	I0419 18:35:36.957646   13300 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 18:35:36.957646   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods
	I0419 18:35:36.957646   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:36.957646   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:36.957646   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:36.958324   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:36.962112   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:36.962112   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:36.962112   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:36.962112   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:36 GMT
	I0419 18:35:36.962112   13300 round_trippers.go:580]     Audit-Id: 1355b7df-e197-47a7-9835-4a69ef0b46f9
	I0419 18:35:36.962112   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:36.962112   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:36.963567   13300 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"406","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56337 chars]
	I0419 18:35:36.968211   13300 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace to be "Ready" ...
	I0419 18:35:36.968760   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:35:36.968815   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:36.968815   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:36.968815   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:36.976148   13300 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 18:35:36.976148   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:36.976148   13300 round_trippers.go:580]     Audit-Id: f7e0e47b-e305-4c2a-a03d-ab808389f3fb
	I0419 18:35:36.981921   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:36.981921   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:36.981921   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:36.981921   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:36.981921   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:36 GMT
	I0419 18:35:36.982234   13300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"406","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0419 18:35:36.982307   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:36.982307   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:36.982844   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:36.982844   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:36.983499   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:36.983499   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:36.983499   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:36.986013   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:36.986013   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:36.986013   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:36.986013   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:36 GMT
	I0419 18:35:36.986013   13300 round_trippers.go:580]     Audit-Id: 16dda2e5-a463-4576-bfdd-a871eb972ecd
	I0419 18:35:36.986360   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"402","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0419 18:35:37.486159   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:35:37.486218   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:37.486218   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:37.486275   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:37.486379   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:37.490412   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:37.490412   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:37.490412   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:37.490412   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:37.490412   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:37 GMT
	I0419 18:35:37.490412   13300 round_trippers.go:580]     Audit-Id: e09aaac7-4e1a-48a7-9b01-58f67d38cf85
	I0419 18:35:37.490412   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:37.490640   13300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"406","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0419 18:35:37.491512   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:37.491596   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:37.491596   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:37.491596   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:37.491911   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:37.491911   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:37.491911   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:37 GMT
	I0419 18:35:37.491911   13300 round_trippers.go:580]     Audit-Id: 70f3705e-047c-44e8-8734-d73490f5ce5f
	I0419 18:35:37.494639   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:37.494639   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:37.494639   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:37.494639   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:37.494989   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"402","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0419 18:35:37.977578   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:35:37.977634   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:37.977634   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:37.977634   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:37.982373   13300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:35:37.982373   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:37.982373   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:37.982373   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:37.982940   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:37.982940   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:37.982940   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:37 GMT
	I0419 18:35:37.982940   13300 round_trippers.go:580]     Audit-Id: 523d22f3-204f-4753-baac-64e2045bcd0a
	I0419 18:35:37.983231   13300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"406","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0419 18:35:37.984091   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:37.984091   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:37.984142   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:37.984142   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:37.994038   13300 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0419 18:35:37.994038   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:37.994038   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:37 GMT
	I0419 18:35:37.994038   13300 round_trippers.go:580]     Audit-Id: 344f43c0-9017-4857-9a57-a5d8d228e4e7
	I0419 18:35:37.994038   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:37.994038   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:37.994038   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:37.994038   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:37.994470   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"402","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0419 18:35:38.477194   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:35:38.477194   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:38.477194   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:38.477194   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:38.478375   13300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:35:38.481914   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:38.482032   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:38.482032   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:38.482032   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:38 GMT
	I0419 18:35:38.482032   13300 round_trippers.go:580]     Audit-Id: 3e2b31e1-fef7-4bf5-bb29-985c3f3e6380
	I0419 18:35:38.482032   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:38.482032   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:38.482281   13300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"406","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0419 18:35:38.483163   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:38.483224   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:38.483224   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:38.483224   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:38.483464   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:38.483464   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:38.483464   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:38.483464   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:38.483464   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:38.485594   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:38.485641   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:38 GMT
	I0419 18:35:38.485641   13300 round_trippers.go:580]     Audit-Id: 81daddc5-33ce-4b8b-a1ed-a6be01a55172
	I0419 18:35:38.485859   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"402","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0419 18:35:38.969575   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:35:38.969575   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:38.969575   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:38.969575   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:38.980011   13300 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0419 18:35:38.980011   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:38.980011   13300 round_trippers.go:580]     Audit-Id: ece13b40-9d70-4d19-b8f3-10fec0176723
	I0419 18:35:38.980011   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:38.980011   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:38.982437   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:38.982437   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:38.982437   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:38 GMT
	I0419 18:35:38.982699   13300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"406","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0419 18:35:38.982699   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:38.982699   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:38.982699   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:38.982699   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:38.985832   13300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:35:38.985832   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:38.985832   13300 round_trippers.go:580]     Audit-Id: daefe8de-47db-4632-b894-f199531ff5bf
	I0419 18:35:38.985832   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:38.985832   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:38.985832   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:38.987467   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:38.987467   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:38 GMT
	I0419 18:35:38.987756   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"402","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0419 18:35:38.988188   13300 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:35:39.471225   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:35:39.471225   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:39.471225   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:39.471308   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:39.471539   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:39.471539   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:39.471539   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:39.471539   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:39.471539   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:39 GMT
	I0419 18:35:39.471539   13300 round_trippers.go:580]     Audit-Id: 179a68db-a317-46d2-940d-3c73e8de4f9f
	I0419 18:35:39.471539   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:39.471539   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:39.475320   13300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"424","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0419 18:35:39.476221   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:39.476300   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:39.476300   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:39.476300   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:39.476613   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:39.478940   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:39.478940   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:39.478940   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:39.478940   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:39 GMT
	I0419 18:35:39.478940   13300 round_trippers.go:580]     Audit-Id: 7cdf87e6-e36c-4a33-8c56-522c2158ba1b
	I0419 18:35:39.478940   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:39.478940   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:39.479474   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"420","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0419 18:35:39.480014   13300 pod_ready.go:92] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"True"
	I0419 18:35:39.480073   13300 pod_ready.go:81] duration metric: took 2.5118564s for pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace to be "Ready" ...
	I0419 18:35:39.480073   13300 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:35:39.480193   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-348000
	I0419 18:35:39.480253   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:39.480253   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:39.480280   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:39.482044   13300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:35:39.482044   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:39.482044   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:39.482044   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:39.482044   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:39.483639   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:39.483639   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:39 GMT
	I0419 18:35:39.483639   13300 round_trippers.go:580]     Audit-Id: a62913e6-43c4-4327-a76d-a2e3329bd07a
	I0419 18:35:39.483820   13300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-348000","namespace":"kube-system","uid":"af4afa87-c484-4b73-9a4d-e86ddcd90049","resourceVersion":"380","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.42.231:2379","kubernetes.io/config.hash":"8fef0b92f87f018a58c19217fdf5d6e1","kubernetes.io/config.mirror":"8fef0b92f87f018a58c19217fdf5d6e1","kubernetes.io/config.seen":"2024-04-20T01:35:08.321891557Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0419 18:35:39.483885   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:39.483885   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:39.483885   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:39.484420   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:39.484682   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:39.484682   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:39.484682   13300 round_trippers.go:580]     Audit-Id: 7003c8c1-f137-4691-a90e-d516cd51ebb7
	I0419 18:35:39.484682   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:39.484682   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:39.484682   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:39.484682   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:39.484682   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:39 GMT
	I0419 18:35:39.487701   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"420","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0419 18:35:39.488421   13300 pod_ready.go:92] pod "etcd-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 18:35:39.488454   13300 pod_ready.go:81] duration metric: took 8.3223ms for pod "etcd-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:35:39.488513   13300 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:35:39.488568   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-348000
	I0419 18:35:39.488568   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:39.488568   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:39.488677   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:39.494059   13300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:35:39.494127   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:39.494127   13300 round_trippers.go:580]     Audit-Id: 2d0de477-0fc1-49ad-bac6-965d8d3873e5
	I0419 18:35:39.494127   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:39.494127   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:39.494192   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:39.494192   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:39.494192   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:39 GMT
	I0419 18:35:39.494766   13300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-348000","namespace":"kube-system","uid":"18f5e677-6a96-47ee-9f61-60ab9445eb92","resourceVersion":"383","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.42.231:8443","kubernetes.io/config.hash":"89aa15d5f8e328791151d96100a36918","kubernetes.io/config.mirror":"89aa15d5f8e328791151d96100a36918","kubernetes.io/config.seen":"2024-04-20T01:35:08.321896559Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0419 18:35:39.495338   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:39.495338   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:39.495338   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:39.495454   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:39.498604   13300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:35:39.498604   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:39.498604   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:39 GMT
	I0419 18:35:39.498604   13300 round_trippers.go:580]     Audit-Id: 1eae98a3-0703-4b39-81cc-d1a1bcae0297
	I0419 18:35:39.498604   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:39.498604   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:39.498604   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:39.498604   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:39.498604   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"420","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0419 18:35:39.498604   13300 pod_ready.go:92] pod "kube-apiserver-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 18:35:39.498604   13300 pod_ready.go:81] duration metric: took 10.0912ms for pod "kube-apiserver-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:35:39.498604   13300 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:35:39.498604   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-348000
	I0419 18:35:39.500176   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:39.500176   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:39.500176   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:39.503104   13300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:35:39.503104   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:39.503104   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:39 GMT
	I0419 18:35:39.503104   13300 round_trippers.go:580]     Audit-Id: 924dbc33-e50e-45ea-ac07-425bd773f3fb
	I0419 18:35:39.503104   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:39.503104   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:39.503104   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:39.503104   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:39.503590   13300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-348000","namespace":"kube-system","uid":"299bb088-9795-4452-87a8-5e96bcacedde","resourceVersion":"381","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"30aa2729d0c65b9f89e1ae2d151edd9b","kubernetes.io/config.mirror":"30aa2729d0c65b9f89e1ae2d151edd9b","kubernetes.io/config.seen":"2024-04-20T01:35:08.321898260Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0419 18:35:39.504261   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:39.504261   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:39.504261   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:39.504343   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:39.506632   13300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:35:39.507556   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:39.507556   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:39 GMT
	I0419 18:35:39.507556   13300 round_trippers.go:580]     Audit-Id: 3e50e9a4-17a4-4a9c-a9d7-4baca7abf308
	I0419 18:35:39.507556   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:39.507556   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:39.507556   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:39.507612   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:39.507750   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"420","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0419 18:35:39.507750   13300 pod_ready.go:92] pod "kube-controller-manager-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 18:35:39.507750   13300 pod_ready.go:81] duration metric: took 9.1462ms for pod "kube-controller-manager-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:35:39.507750   13300 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kj76x" in "kube-system" namespace to be "Ready" ...
	I0419 18:35:39.507750   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kj76x
	I0419 18:35:39.507750   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:39.507750   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:39.507750   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:39.510135   13300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:35:39.511504   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:39.511504   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:39.511504   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:39.511504   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:39.511560   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:39.511560   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:39 GMT
	I0419 18:35:39.511560   13300 round_trippers.go:580]     Audit-Id: 29851fba-f6e8-454d-b849-426e31de2735
	I0419 18:35:39.511654   13300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kj76x","generateName":"kube-proxy-","namespace":"kube-system","uid":"274342c4-c21f-4279-b0ea-743d8e2c1463","resourceVersion":"377","creationTimestamp":"2024-04-20T01:35:22Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0419 18:35:39.512395   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:39.512422   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:39.512422   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:39.512459   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:39.514853   13300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:35:39.514853   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:39.514853   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:39.514853   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:39.514853   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:39.514853   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:39 GMT
	I0419 18:35:39.514853   13300 round_trippers.go:580]     Audit-Id: 405ffe50-3e4d-4a39-ac73-a76ee5dcb71f
	I0419 18:35:39.514853   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:39.515218   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"420","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0419 18:35:39.515663   13300 pod_ready.go:92] pod "kube-proxy-kj76x" in "kube-system" namespace has status "Ready":"True"
	I0419 18:35:39.515663   13300 pod_ready.go:81] duration metric: took 7.9125ms for pod "kube-proxy-kj76x" in "kube-system" namespace to be "Ready" ...
	I0419 18:35:39.515663   13300 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:35:39.674710   13300 request.go:629] Waited for 158.8196ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348000
	I0419 18:35:39.674924   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348000
	I0419 18:35:39.674924   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:39.674924   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:39.674924   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:39.675601   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:39.675601   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:39.679072   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:39.679072   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:39.679072   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:39.679072   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:39 GMT
	I0419 18:35:39.679072   13300 round_trippers.go:580]     Audit-Id: f7b839d5-2c60-4d49-b744-a38538d2e76a
	I0419 18:35:39.679072   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:39.679261   13300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-348000","namespace":"kube-system","uid":"000cfafe-a513-4738-9de2-3c25244b72be","resourceVersion":"382","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"92813b2aed63b63058d3fd06709fa24e","kubernetes.io/config.mirror":"92813b2aed63b63058d3fd06709fa24e","kubernetes.io/config.seen":"2024-04-20T01:35:08.321899460Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0419 18:35:39.884364   13300 request.go:629] Waited for 204.4434ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:39.884801   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:35:39.884801   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:39.884801   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:39.884801   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:39.885275   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:39.885275   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:39.885275   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:39.885275   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:39.885275   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:39.885275   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:39.885275   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:39 GMT
	I0419 18:35:39.888717   13300 round_trippers.go:580]     Audit-Id: 6dc37514-4359-4677-bffa-9c78b7b3945e
	I0419 18:35:39.888916   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"420","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0419 18:35:39.889038   13300 pod_ready.go:92] pod "kube-scheduler-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 18:35:39.889038   13300 pod_ready.go:81] duration metric: took 373.3749ms for pod "kube-scheduler-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:35:39.889038   13300 pod_ready.go:38] duration metric: took 2.9313854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 18:35:39.889038   13300 api_server.go:52] waiting for apiserver process to appear ...
	I0419 18:35:39.900719   13300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 18:35:39.925920   13300 command_runner.go:130] > 2024
	I0419 18:35:39.929163   13300 api_server.go:72] duration metric: took 17.2872633s to wait for apiserver process to appear ...
	I0419 18:35:39.929163   13300 api_server.go:88] waiting for apiserver healthz status ...
	I0419 18:35:39.929299   13300 api_server.go:253] Checking apiserver healthz at https://172.19.42.231:8443/healthz ...
	I0419 18:35:39.935202   13300 api_server.go:279] https://172.19.42.231:8443/healthz returned 200:
	ok
	I0419 18:35:39.936923   13300 round_trippers.go:463] GET https://172.19.42.231:8443/version
	I0419 18:35:39.936959   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:39.936959   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:39.936959   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:39.937214   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:39.937214   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:39.937214   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:39.937214   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:39.938751   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:39.938751   13300 round_trippers.go:580]     Content-Length: 263
	I0419 18:35:39.938751   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:39 GMT
	I0419 18:35:39.938751   13300 round_trippers.go:580]     Audit-Id: 41668efc-3fda-4ec8-a91a-bcedfa588b85
	I0419 18:35:39.938751   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:39.938822   13300 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0419 18:35:39.938907   13300 api_server.go:141] control plane version: v1.30.0
	I0419 18:35:39.938907   13300 api_server.go:131] duration metric: took 9.7443ms to wait for apiserver health ...
	I0419 18:35:39.938907   13300 system_pods.go:43] waiting for kube-system pods to appear ...
	I0419 18:35:40.073187   13300 request.go:629] Waited for 134.0461ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods
	I0419 18:35:40.073455   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods
	I0419 18:35:40.073495   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:40.073495   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:40.073495   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:40.074187   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:40.079164   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:40.079164   13300 round_trippers.go:580]     Audit-Id: 0f1123d4-3ee3-4d86-b0b4-6f6ab7e0636c
	I0419 18:35:40.079164   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:40.079164   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:40.079164   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:40.079164   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:40.079164   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:40 GMT
	I0419 18:35:40.080038   13300 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"424","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0419 18:35:40.084462   13300 system_pods.go:59] 8 kube-system pods found
	I0419 18:35:40.084462   13300 system_pods.go:61] "coredns-7db6d8ff4d-7w477" [895ddde9-466d-4abf-b6f4-594847b26c6c] Running
	I0419 18:35:40.084462   13300 system_pods.go:61] "etcd-multinode-348000" [af4afa87-c484-4b73-9a4d-e86ddcd90049] Running
	I0419 18:35:40.084462   13300 system_pods.go:61] "kindnet-s4fsr" [46c91d5e-edfa-4254-a802-148047caeab5] Running
	I0419 18:35:40.084462   13300 system_pods.go:61] "kube-apiserver-multinode-348000" [18f5e677-6a96-47ee-9f61-60ab9445eb92] Running
	I0419 18:35:40.084462   13300 system_pods.go:61] "kube-controller-manager-multinode-348000" [299bb088-9795-4452-87a8-5e96bcacedde] Running
	I0419 18:35:40.084462   13300 system_pods.go:61] "kube-proxy-kj76x" [274342c4-c21f-4279-b0ea-743d8e2c1463] Running
	I0419 18:35:40.084462   13300 system_pods.go:61] "kube-scheduler-multinode-348000" [000cfafe-a513-4738-9de2-3c25244b72be] Running
	I0419 18:35:40.084462   13300 system_pods.go:61] "storage-provisioner" [ffa0cfb9-91fb-4d5b-abe7-11992c731b74] Running
	I0419 18:35:40.084462   13300 system_pods.go:74] duration metric: took 145.5542ms to wait for pod list to return data ...
	I0419 18:35:40.084462   13300 default_sa.go:34] waiting for default service account to be created ...
	I0419 18:35:40.279348   13300 request.go:629] Waited for 194.8857ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.231:8443/api/v1/namespaces/default/serviceaccounts
	I0419 18:35:40.279348   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/default/serviceaccounts
	I0419 18:35:40.279348   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:40.279348   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:40.279348   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:40.282311   13300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:35:40.282311   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:40.282311   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:40.282311   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:40.283636   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:40.283636   13300 round_trippers.go:580]     Content-Length: 261
	I0419 18:35:40.283636   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:40 GMT
	I0419 18:35:40.283636   13300 round_trippers.go:580]     Audit-Id: b64a576a-92c7-4cdd-aaf0-403f03b337d3
	I0419 18:35:40.283636   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:40.283636   13300 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"fd56f1e7-7816-4124-aeed-e48a3ea6b7a7","resourceVersion":"301","creationTimestamp":"2024-04-20T01:35:22Z"}}]}
	I0419 18:35:40.284050   13300 default_sa.go:45] found service account: "default"
	I0419 18:35:40.284127   13300 default_sa.go:55] duration metric: took 199.6653ms for default service account to be created ...
	I0419 18:35:40.284127   13300 system_pods.go:116] waiting for k8s-apps to be running ...
	I0419 18:35:40.472088   13300 request.go:629] Waited for 187.6834ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods
	I0419 18:35:40.472213   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods
	I0419 18:35:40.472213   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:40.472213   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:40.472413   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:40.472831   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:35:40.472831   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:40.472831   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:40 GMT
	I0419 18:35:40.472831   13300 round_trippers.go:580]     Audit-Id: 6c38babf-98ac-4dd3-a302-e469cc5b47bc
	I0419 18:35:40.472831   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:40.472831   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:40.472831   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:40.478530   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:40.479784   13300 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"424","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0419 18:35:40.482278   13300 system_pods.go:86] 8 kube-system pods found
	I0419 18:35:40.482278   13300 system_pods.go:89] "coredns-7db6d8ff4d-7w477" [895ddde9-466d-4abf-b6f4-594847b26c6c] Running
	I0419 18:35:40.482278   13300 system_pods.go:89] "etcd-multinode-348000" [af4afa87-c484-4b73-9a4d-e86ddcd90049] Running
	I0419 18:35:40.482278   13300 system_pods.go:89] "kindnet-s4fsr" [46c91d5e-edfa-4254-a802-148047caeab5] Running
	I0419 18:35:40.482278   13300 system_pods.go:89] "kube-apiserver-multinode-348000" [18f5e677-6a96-47ee-9f61-60ab9445eb92] Running
	I0419 18:35:40.482278   13300 system_pods.go:89] "kube-controller-manager-multinode-348000" [299bb088-9795-4452-87a8-5e96bcacedde] Running
	I0419 18:35:40.482278   13300 system_pods.go:89] "kube-proxy-kj76x" [274342c4-c21f-4279-b0ea-743d8e2c1463] Running
	I0419 18:35:40.482278   13300 system_pods.go:89] "kube-scheduler-multinode-348000" [000cfafe-a513-4738-9de2-3c25244b72be] Running
	I0419 18:35:40.482278   13300 system_pods.go:89] "storage-provisioner" [ffa0cfb9-91fb-4d5b-abe7-11992c731b74] Running
	I0419 18:35:40.482278   13300 system_pods.go:126] duration metric: took 198.15ms to wait for k8s-apps to be running ...
	I0419 18:35:40.482278   13300 system_svc.go:44] waiting for kubelet service to be running ....
	I0419 18:35:40.496424   13300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 18:35:40.522886   13300 system_svc.go:56] duration metric: took 40.6079ms WaitForService to wait for kubelet
	I0419 18:35:40.522886   13300 kubeadm.go:576] duration metric: took 17.880985s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 18:35:40.522886   13300 node_conditions.go:102] verifying NodePressure condition ...
	I0419 18:35:40.685583   13300 request.go:629] Waited for 162.4873ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.231:8443/api/v1/nodes
	I0419 18:35:40.685705   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes
	I0419 18:35:40.685705   13300 round_trippers.go:469] Request Headers:
	I0419 18:35:40.685895   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:35:40.686043   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:35:40.691970   13300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:35:40.692521   13300 round_trippers.go:577] Response Headers:
	I0419 18:35:40.692521   13300 round_trippers.go:580]     Audit-Id: 0588320d-690d-47fb-bec5-94077bc89d05
	I0419 18:35:40.692521   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:35:40.692521   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:35:40.692521   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:35:40.692521   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:35:40.692521   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:35:40 GMT
	I0419 18:35:40.692778   13300 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"420","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5012 chars]
	I0419 18:35:40.693368   13300 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 18:35:40.693368   13300 node_conditions.go:123] node cpu capacity is 2
	I0419 18:35:40.693368   13300 node_conditions.go:105] duration metric: took 170.4818ms to run NodePressure ...
	I0419 18:35:40.693368   13300 start.go:240] waiting for startup goroutines ...
	I0419 18:35:40.693368   13300 start.go:245] waiting for cluster config update ...
	I0419 18:35:40.693536   13300 start.go:254] writing updated cluster config ...
	I0419 18:35:40.697912   13300 out.go:177] 
	I0419 18:35:40.701698   13300 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:35:40.708649   13300 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:35:40.708649   13300 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 18:35:40.715150   13300 out.go:177] * Starting "multinode-348000-m02" worker node in "multinode-348000" cluster
	I0419 18:35:40.717389   13300 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 18:35:40.717491   13300 cache.go:56] Caching tarball of preloaded images
	I0419 18:35:40.717874   13300 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0419 18:35:40.717999   13300 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 18:35:40.717999   13300 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 18:35:40.722778   13300 start.go:360] acquireMachinesLock for multinode-348000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 18:35:40.722778   13300 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-348000-m02"
	I0419 18:35:40.722778   13300 start.go:93] Provisioning new machine with config: &{Name:multinode-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-348000
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.42.231 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString
:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0419 18:35:40.723834   13300 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0419 18:35:40.725729   13300 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 18:35:40.727337   13300 start.go:159] libmachine.API.Create for "multinode-348000" (driver="hyperv")
	I0419 18:35:40.727438   13300 client.go:168] LocalClient.Create starting
	I0419 18:35:40.727438   13300 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0419 18:35:40.728073   13300 main.go:141] libmachine: Decoding PEM data...
	I0419 18:35:40.728073   13300 main.go:141] libmachine: Parsing certificate...
	I0419 18:35:40.728073   13300 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0419 18:35:40.728073   13300 main.go:141] libmachine: Decoding PEM data...
	I0419 18:35:40.728073   13300 main.go:141] libmachine: Parsing certificate...
	I0419 18:35:40.728073   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0419 18:35:42.578382   13300 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0419 18:35:42.578382   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:35:42.578382   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0419 18:35:44.276534   13300 main.go:141] libmachine: [stdout =====>] : False
	
	I0419 18:35:44.276534   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:35:44.276750   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 18:35:45.757799   13300 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 18:35:45.757799   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:35:45.757799   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 18:35:49.291615   13300 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 18:35:49.296818   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:35:49.298761   13300 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0419 18:35:49.778238   13300 main.go:141] libmachine: Creating SSH key...
	I0419 18:35:50.018125   13300 main.go:141] libmachine: Creating VM...
	I0419 18:35:50.018125   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0419 18:35:52.854634   13300 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0419 18:35:52.867578   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:35:52.867578   13300 main.go:141] libmachine: Using switch "Default Switch"
	I0419 18:35:52.867763   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0419 18:35:54.656643   13300 main.go:141] libmachine: [stdout =====>] : True
	
	I0419 18:35:54.667329   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:35:54.667329   13300 main.go:141] libmachine: Creating VHD
	I0419 18:35:54.667329   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0419 18:35:58.333796   13300 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F57C3D71-85DB-4339-A4FD-5125B2065C57
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0419 18:35:58.333889   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:35:58.333889   13300 main.go:141] libmachine: Writing magic tar header
	I0419 18:35:58.333889   13300 main.go:141] libmachine: Writing SSH key tar header
	I0419 18:35:58.343799   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0419 18:36:01.509723   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:36:01.522319   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:01.522381   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\disk.vhd' -SizeBytes 20000MB
	I0419 18:36:04.011966   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:36:04.012218   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:04.012326   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-348000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0419 18:36:07.559962   13300 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-348000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0419 18:36:07.559962   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:07.559962   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-348000-m02 -DynamicMemoryEnabled $false
	I0419 18:36:09.725138   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:36:09.725138   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:09.725138   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-348000-m02 -Count 2
	I0419 18:36:11.845070   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:36:11.845070   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:11.857652   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-348000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\boot2docker.iso'
	I0419 18:36:14.373163   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:36:14.377938   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:14.378028   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-348000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\disk.vhd'
	I0419 18:36:16.963775   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:36:16.978178   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:16.978178   13300 main.go:141] libmachine: Starting VM...
	I0419 18:36:16.978178   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-348000-m02
	I0419 18:36:19.984884   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:36:19.984884   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:19.984884   13300 main.go:141] libmachine: Waiting for host to start...
	I0419 18:36:19.990572   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:36:22.158809   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:36:22.158809   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:22.170398   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:36:24.675032   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:36:24.675032   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:25.680928   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:36:27.806962   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:36:27.806962   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:27.815762   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:36:30.292336   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:36:30.292336   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:31.297617   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:36:33.433659   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:36:33.433659   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:33.443951   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:36:35.897487   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:36:35.897487   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:36.901506   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:36:39.002512   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:36:39.014709   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:39.014709   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:36:41.437291   13300 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:36:41.450185   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:42.453992   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:36:44.559875   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:36:44.572541   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:44.572750   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:36:47.014512   13300 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:36:47.026742   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:47.026849   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:36:49.086631   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:36:49.099014   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:49.099014   13300 machine.go:94] provisionDockerMachine start ...
	I0419 18:36:49.099193   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:36:51.150370   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:36:51.164762   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:51.164762   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:36:53.633889   13300 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:36:53.648407   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:53.656197   13300 main.go:141] libmachine: Using SSH client type: native
	I0419 18:36:53.666019   13300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.249 22 <nil> <nil>}
	I0419 18:36:53.666019   13300 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 18:36:53.789018   13300 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0419 18:36:53.789101   13300 buildroot.go:166] provisioning hostname "multinode-348000-m02"
	I0419 18:36:53.789101   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:36:55.837694   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:36:55.850782   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:55.850782   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:36:58.345211   13300 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:36:58.345211   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:36:58.354993   13300 main.go:141] libmachine: Using SSH client type: native
	I0419 18:36:58.354993   13300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.249 22 <nil> <nil>}
	I0419 18:36:58.354993   13300 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-348000-m02 && echo "multinode-348000-m02" | sudo tee /etc/hostname
	I0419 18:36:58.511315   13300 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-348000-m02
	
	I0419 18:36:58.511315   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:37:00.579571   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:37:00.592351   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:00.592351   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:37:03.082436   13300 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:37:03.094806   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:03.102891   13300 main.go:141] libmachine: Using SSH client type: native
	I0419 18:37:03.104134   13300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.249 22 <nil> <nil>}
	I0419 18:37:03.104134   13300 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-348000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-348000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-348000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 18:37:03.248615   13300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 18:37:03.248615   13300 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0419 18:37:03.248615   13300 buildroot.go:174] setting up certificates
	I0419 18:37:03.248615   13300 provision.go:84] configureAuth start
	I0419 18:37:03.248615   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:37:05.316345   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:37:05.316345   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:05.329191   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:37:07.802916   13300 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:37:07.802916   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:07.816178   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:37:09.889223   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:37:09.902323   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:09.902462   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:37:12.388917   13300 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:37:12.388917   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:12.388917   13300 provision.go:143] copyHostCerts
	I0419 18:37:12.403367   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0419 18:37:12.403367   13300 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0419 18:37:12.403367   13300 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0419 18:37:12.404062   13300 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0419 18:37:12.405299   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0419 18:37:12.405543   13300 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0419 18:37:12.405543   13300 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0419 18:37:12.405543   13300 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0419 18:37:12.406999   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0419 18:37:12.407042   13300 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0419 18:37:12.407042   13300 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0419 18:37:12.407666   13300 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0419 18:37:12.408475   13300 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-348000-m02 san=[127.0.0.1 172.19.32.249 localhost minikube multinode-348000-m02]
	I0419 18:37:12.664649   13300 provision.go:177] copyRemoteCerts
	I0419 18:37:12.675278   13300 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 18:37:12.675278   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:37:14.764573   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:37:14.764573   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:14.776639   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:37:17.287919   13300 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:37:17.300317   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:17.300528   13300 sshutil.go:53] new ssh client: &{IP:172.19.32.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\id_rsa Username:docker}
	I0419 18:37:17.400441   13300 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7251516s)
	I0419 18:37:17.400441   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0419 18:37:17.400736   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0419 18:37:17.451826   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0419 18:37:17.451826   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0419 18:37:17.504092   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0419 18:37:17.504092   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 18:37:17.557457   13300 provision.go:87] duration metric: took 14.3088088s to configureAuth
	I0419 18:37:17.557457   13300 buildroot.go:189] setting minikube options for container-runtime
	I0419 18:37:17.558390   13300 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:37:17.558531   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:37:19.631786   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:37:19.635293   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:19.635375   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:37:22.147983   13300 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:37:22.149303   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:22.154320   13300 main.go:141] libmachine: Using SSH client type: native
	I0419 18:37:22.154998   13300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.249 22 <nil> <nil>}
	I0419 18:37:22.154998   13300 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0419 18:37:22.276826   13300 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0419 18:37:22.276826   13300 buildroot.go:70] root file system type: tmpfs
	I0419 18:37:22.277056   13300 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0419 18:37:22.277149   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:37:24.343589   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:37:24.343589   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:24.355639   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:37:26.887421   13300 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:37:26.887421   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:26.906720   13300 main.go:141] libmachine: Using SSH client type: native
	I0419 18:37:26.906720   13300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.249 22 <nil> <nil>}
	I0419 18:37:26.906720   13300 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.42.231"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0419 18:37:27.059035   13300 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.42.231
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0419 18:37:27.059641   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:37:29.149131   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:37:29.150955   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:29.151123   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:37:31.672700   13300 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:37:31.672906   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:31.679317   13300 main.go:141] libmachine: Using SSH client type: native
	I0419 18:37:31.679909   13300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.249 22 <nil> <nil>}
	I0419 18:37:31.679986   13300 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0419 18:37:33.822519   13300 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0419 18:37:33.822519   13300 machine.go:97] duration metric: took 44.7234017s to provisionDockerMachine
	I0419 18:37:33.822519   13300 client.go:171] duration metric: took 1m53.094823s to LocalClient.Create
	I0419 18:37:33.822519   13300 start.go:167] duration metric: took 1m53.0949242s to libmachine.API.Create "multinode-348000"
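	The `sudo diff -u ... || { sudo mv ...; systemctl daemon-reload && ... restart docker; }` command above is an idempotent update pattern: the freshly rendered unit file replaces the deployed one (and triggers a restart) only when the two differ. A simplified sketch of that pattern under illustrative names, with the restart step reduced to a status message:

```shell
# Install a candidate config file only if it differs from the deployed one.
# "changed" is where a caller would daemon-reload and restart the service.
update_unit() {
  new="$1" current="$2"
  if ! diff -u "$current" "$new" >/dev/null 2>&1; then
    mv "$new" "$current"   # promote the candidate into place
    echo "changed"
  else
    rm -f "$new"           # identical content: discard the candidate
    echo "unchanged"
  fi
}
```

	In the log the `diff` fails with "No such file or directory" because no unit exists yet on the fresh node, which also takes the replace-and-restart branch.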
	I0419 18:37:33.823100   13300 start.go:293] postStartSetup for "multinode-348000-m02" (driver="hyperv")
	I0419 18:37:33.823100   13300 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 18:37:33.840494   13300 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 18:37:33.840494   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:37:35.926095   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:37:35.939871   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:35.939968   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:37:38.421151   13300 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:37:38.421151   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:38.421267   13300 sshutil.go:53] new ssh client: &{IP:172.19.32.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\id_rsa Username:docker}
	I0419 18:37:38.522112   13300 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6816075s)
	I0419 18:37:38.537415   13300 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 18:37:38.542754   13300 command_runner.go:130] > NAME=Buildroot
	I0419 18:37:38.542754   13300 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0419 18:37:38.542754   13300 command_runner.go:130] > ID=buildroot
	I0419 18:37:38.545284   13300 command_runner.go:130] > VERSION_ID=2023.02.9
	I0419 18:37:38.545284   13300 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0419 18:37:38.545339   13300 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 18:37:38.545938   13300 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0419 18:37:38.546017   13300 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0419 18:37:38.546826   13300 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> 34162.pem in /etc/ssl/certs
	I0419 18:37:38.546826   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /etc/ssl/certs/34162.pem
	I0419 18:37:38.561426   13300 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 18:37:38.581749   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /etc/ssl/certs/34162.pem (1708 bytes)
	I0419 18:37:38.630459   13300 start.go:296] duration metric: took 4.8073475s for postStartSetup
	I0419 18:37:38.633185   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:37:40.714145   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:37:40.714145   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:40.714145   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:37:43.211094   13300 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:37:43.223496   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:43.223917   13300 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 18:37:43.228765   13300 start.go:128] duration metric: took 2m2.5046511s to createHost
	I0419 18:37:43.228765   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:37:45.279007   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:37:45.288437   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:45.288437   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:37:47.774469   13300 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:37:47.774469   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:47.781043   13300 main.go:141] libmachine: Using SSH client type: native
	I0419 18:37:47.781171   13300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.249 22 <nil> <nil>}
	I0419 18:37:47.781171   13300 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 18:37:47.903930   13300 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713577067.897162437
	
	I0419 18:37:47.903930   13300 fix.go:216] guest clock: 1713577067.897162437
	I0419 18:37:47.903930   13300 fix.go:229] Guest: 2024-04-19 18:37:47.897162437 -0700 PDT Remote: 2024-04-19 18:37:43.2287654 -0700 PDT m=+335.296254301 (delta=4.668397037s)
	I0419 18:37:47.904023   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:37:49.975315   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:37:49.988741   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:49.988799   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:37:52.511238   13300 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:37:52.511238   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:52.517804   13300 main.go:141] libmachine: Using SSH client type: native
	I0419 18:37:52.518348   13300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.32.249 22 <nil> <nil>}
	I0419 18:37:52.518348   13300 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713577067
	I0419 18:37:52.661287   13300 main.go:141] libmachine: SSH cmd err, output: <nil>: Sat Apr 20 01:37:47 UTC 2024
	
	I0419 18:37:52.661377   13300 fix.go:236] clock set: Sat Apr 20 01:37:47 UTC 2024
	 (err=<nil>)
	I0419 18:37:52.661377   13300 start.go:83] releasing machines lock for "multinode-348000-m02", held for 2m11.9382971s
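	The clock-fix sequence above reads the guest clock (`date +%s.%N`), compares it against the host, and resyncs with `sudo date -s @<epoch>` when they drift (here a delta of ~4.67s). The comparison step can be sketched as a whole-second delta helper (function name illustrative):

```shell
# Absolute guest/host clock difference in whole seconds, as compared
# in the fix.go delta check above. Inputs are Unix epoch timestamps.
clock_delta() {
  guest="$1" host="$2"
  d=$(( guest - host ))
  if [ "$d" -lt 0 ]; then d=$(( -d )); fi
  echo "$d"
}
```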
	I0419 18:37:52.661377   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:37:54.741667   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:37:54.741667   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:54.754382   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:37:57.232449   13300 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:37:57.232449   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:57.235951   13300 out.go:177] * Found network options:
	I0419 18:37:57.238683   13300 out.go:177]   - NO_PROXY=172.19.42.231
	W0419 18:37:57.240993   13300 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 18:37:57.243206   13300 out.go:177]   - NO_PROXY=172.19.42.231
	W0419 18:37:57.245668   13300 proxy.go:119] fail to check proxy env: Error ip not in block
	W0419 18:37:57.246224   13300 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 18:37:57.251011   13300 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 18:37:57.251143   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:37:57.255687   13300 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0419 18:37:57.255687   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:37:59.346493   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:37:59.346744   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:59.346744   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:37:59.346872   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:37:59.346872   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:37:59.347067   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:38:01.944634   13300 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:38:01.944634   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:38:01.944634   13300 sshutil.go:53] new ssh client: &{IP:172.19.32.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\id_rsa Username:docker}
	I0419 18:38:01.967980   13300 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:38:01.967980   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:38:01.968049   13300 sshutil.go:53] new ssh client: &{IP:172.19.32.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\id_rsa Username:docker}
	I0419 18:38:02.028215   13300 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0419 18:38:02.035002   13300 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7791689s)
	W0419 18:38:02.035063   13300 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 18:38:02.049929   13300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 18:38:02.169447   13300 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0419 18:38:02.169447   13300 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0419 18:38:02.169447   13300 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 18:38:02.169447   13300 start.go:494] detecting cgroup driver to use...
	I0419 18:38:02.169447   13300 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.918354s)
	I0419 18:38:02.169447   13300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 18:38:02.205258   13300 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0419 18:38:02.221546   13300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0419 18:38:02.259066   13300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0419 18:38:02.282688   13300 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0419 18:38:02.298830   13300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0419 18:38:02.334493   13300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 18:38:02.374559   13300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0419 18:38:02.406898   13300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 18:38:02.440977   13300 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 18:38:02.478373   13300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0419 18:38:02.511811   13300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0419 18:38:02.545737   13300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0419 18:38:02.579240   13300 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 18:38:02.599972   13300 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0419 18:38:02.611378   13300 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 18:38:02.645041   13300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:38:02.846453   13300 ssh_runner.go:195] Run: sudo systemctl restart containerd
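	The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place, preserving each line's indentation via a capture group (e.g. forcing `SystemdCgroup = false` for the cgroupfs driver). A sketch of that one edit against an arbitrary file path (the function name is illustrative; GNU sed assumed, as on the Buildroot guest):

```shell
# Force the cgroupfs driver in a containerd-style TOML file, keeping
# the original indentation, exactly as the sed command in the log does.
set_cgroupfs() {
  conf="$1"
  sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$conf"
}
```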
	I0419 18:38:02.877497   13300 start.go:494] detecting cgroup driver to use...
	I0419 18:38:02.891191   13300 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0419 18:38:02.919788   13300 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0419 18:38:02.919788   13300 command_runner.go:130] > [Unit]
	I0419 18:38:02.919788   13300 command_runner.go:130] > Description=Docker Application Container Engine
	I0419 18:38:02.919788   13300 command_runner.go:130] > Documentation=https://docs.docker.com
	I0419 18:38:02.919788   13300 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0419 18:38:02.919788   13300 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0419 18:38:02.919788   13300 command_runner.go:130] > StartLimitBurst=3
	I0419 18:38:02.919788   13300 command_runner.go:130] > StartLimitIntervalSec=60
	I0419 18:38:02.919788   13300 command_runner.go:130] > [Service]
	I0419 18:38:02.919788   13300 command_runner.go:130] > Type=notify
	I0419 18:38:02.919788   13300 command_runner.go:130] > Restart=on-failure
	I0419 18:38:02.919788   13300 command_runner.go:130] > Environment=NO_PROXY=172.19.42.231
	I0419 18:38:02.919788   13300 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0419 18:38:02.919788   13300 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0419 18:38:02.919788   13300 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0419 18:38:02.919788   13300 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0419 18:38:02.919788   13300 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0419 18:38:02.919788   13300 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0419 18:38:02.919788   13300 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0419 18:38:02.919788   13300 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0419 18:38:02.919788   13300 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0419 18:38:02.919788   13300 command_runner.go:130] > ExecStart=
	I0419 18:38:02.919788   13300 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0419 18:38:02.919788   13300 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0419 18:38:02.919788   13300 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0419 18:38:02.920425   13300 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0419 18:38:02.920425   13300 command_runner.go:130] > LimitNOFILE=infinity
	I0419 18:38:02.920425   13300 command_runner.go:130] > LimitNPROC=infinity
	I0419 18:38:02.920425   13300 command_runner.go:130] > LimitCORE=infinity
	I0419 18:38:02.920425   13300 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0419 18:38:02.920502   13300 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0419 18:38:02.920502   13300 command_runner.go:130] > TasksMax=infinity
	I0419 18:38:02.920502   13300 command_runner.go:130] > TimeoutStartSec=0
	I0419 18:38:02.920502   13300 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0419 18:38:02.920502   13300 command_runner.go:130] > Delegate=yes
	I0419 18:38:02.920562   13300 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0419 18:38:02.920562   13300 command_runner.go:130] > KillMode=process
	I0419 18:38:02.920562   13300 command_runner.go:130] > [Install]
	I0419 18:38:02.920562   13300 command_runner.go:130] > WantedBy=multi-user.target
	I0419 18:38:02.934939   13300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 18:38:02.973898   13300 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 18:38:03.033196   13300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 18:38:03.076182   13300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 18:38:03.118178   13300 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0419 18:38:03.176740   13300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 18:38:03.201082   13300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 18:38:03.234678   13300 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0419 18:38:03.251591   13300 ssh_runner.go:195] Run: which cri-dockerd
	I0419 18:38:03.257708   13300 command_runner.go:130] > /usr/bin/cri-dockerd
	I0419 18:38:03.272666   13300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0419 18:38:03.291863   13300 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0419 18:38:03.339851   13300 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0419 18:38:03.548034   13300 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0419 18:38:03.736042   13300 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0419 18:38:03.736127   13300 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0419 18:38:03.781535   13300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:38:03.976892   13300 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 18:38:06.489786   13300 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5125403s)
	I0419 18:38:06.506156   13300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0419 18:38:06.541131   13300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 18:38:06.582721   13300 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0419 18:38:06.782580   13300 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0419 18:38:06.994510   13300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:38:07.189737   13300 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0419 18:38:07.235598   13300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 18:38:07.271698   13300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:38:07.477927   13300 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0419 18:38:07.587589   13300 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0419 18:38:07.602646   13300 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0419 18:38:07.613902   13300 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0419 18:38:07.614059   13300 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0419 18:38:07.614059   13300 command_runner.go:130] > Device: 0,22	Inode: 879         Links: 1
	I0419 18:38:07.614133   13300 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0419 18:38:07.614133   13300 command_runner.go:130] > Access: 2024-04-20 01:38:07.497590885 +0000
	I0419 18:38:07.614133   13300 command_runner.go:130] > Modify: 2024-04-20 01:38:07.497590885 +0000
	I0419 18:38:07.614133   13300 command_runner.go:130] > Change: 2024-04-20 01:38:07.501590896 +0000
	I0419 18:38:07.614133   13300 command_runner.go:130] >  Birth: -
	I0419 18:38:07.614133   13300 start.go:562] Will wait 60s for crictl version
	I0419 18:38:07.630417   13300 ssh_runner.go:195] Run: which crictl
	I0419 18:38:07.636212   13300 command_runner.go:130] > /usr/bin/crictl
	I0419 18:38:07.650403   13300 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 18:38:07.707052   13300 command_runner.go:130] > Version:  0.1.0
	I0419 18:38:07.707052   13300 command_runner.go:130] > RuntimeName:  docker
	I0419 18:38:07.707052   13300 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0419 18:38:07.707052   13300 command_runner.go:130] > RuntimeApiVersion:  v1
	I0419 18:38:07.707225   13300 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0419 18:38:07.716957   13300 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 18:38:07.738050   13300 command_runner.go:130] > 26.0.1
	I0419 18:38:07.762455   13300 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 18:38:07.790914   13300 command_runner.go:130] > 26.0.1
	I0419 18:38:07.797666   13300 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0419 18:38:07.800011   13300 out.go:177]   - env NO_PROXY=172.19.42.231
	I0419 18:38:07.802531   13300 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0419 18:38:07.806114   13300 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0419 18:38:07.806114   13300 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0419 18:38:07.806114   13300 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0419 18:38:07.806114   13300 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8c:b9:25 Flags:up|broadcast|multicast|running}
	I0419 18:38:07.807889   13300 ip.go:210] interface addr: fe80::ce04:318e:a1d8:4460/64
	I0419 18:38:07.807889   13300 ip.go:210] interface addr: 172.19.32.1/20
	I0419 18:38:07.825060   13300 ssh_runner.go:195] Run: grep 172.19.32.1	host.minikube.internal$ /etc/hosts
	I0419 18:38:07.831486   13300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.32.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 18:38:07.849178   13300 mustload.go:65] Loading cluster: multinode-348000
	I0419 18:38:07.853598   13300 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:38:07.853912   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:38:09.862788   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:38:09.862788   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:38:09.862788   13300 host.go:66] Checking if "multinode-348000" exists ...
	I0419 18:38:09.863565   13300 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000 for IP: 172.19.32.249
	I0419 18:38:09.863565   13300 certs.go:194] generating shared ca certs ...
	I0419 18:38:09.863565   13300 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:38:09.863824   13300 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0419 18:38:09.864406   13300 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0419 18:38:09.864585   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 18:38:09.864841   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0419 18:38:09.865094   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 18:38:09.865261   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 18:38:09.865944   13300 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem (1338 bytes)
	W0419 18:38:09.866192   13300 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416_empty.pem, impossibly tiny 0 bytes
	I0419 18:38:09.866353   13300 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0419 18:38:09.866392   13300 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0419 18:38:09.867039   13300 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0419 18:38:09.867416   13300 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0419 18:38:09.867890   13300 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem (1708 bytes)
	I0419 18:38:09.868127   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem -> /usr/share/ca-certificates/3416.pem
	I0419 18:38:09.868303   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /usr/share/ca-certificates/34162.pem
	I0419 18:38:09.868464   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:38:09.868692   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 18:38:09.921043   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 18:38:09.963552   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 18:38:10.010791   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 18:38:10.055938   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem --> /usr/share/ca-certificates/3416.pem (1338 bytes)
	I0419 18:38:10.107094   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /usr/share/ca-certificates/34162.pem (1708 bytes)
	I0419 18:38:10.152968   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 18:38:10.206460   13300 ssh_runner.go:195] Run: openssl version
	I0419 18:38:10.219814   13300 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0419 18:38:10.236408   13300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3416.pem && ln -fs /usr/share/ca-certificates/3416.pem /etc/ssl/certs/3416.pem"
	I0419 18:38:10.267322   13300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3416.pem
	I0419 18:38:10.275067   13300 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 18:38:10.275204   13300 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 18:38:10.288606   13300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3416.pem
	I0419 18:38:10.294015   13300 command_runner.go:130] > 51391683
	I0419 18:38:10.309775   13300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3416.pem /etc/ssl/certs/51391683.0"
	I0419 18:38:10.346697   13300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34162.pem && ln -fs /usr/share/ca-certificates/34162.pem /etc/ssl/certs/34162.pem"
	I0419 18:38:10.381465   13300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34162.pem
	I0419 18:38:10.389313   13300 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 18:38:10.389313   13300 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 18:38:10.400883   13300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34162.pem
	I0419 18:38:10.411518   13300 command_runner.go:130] > 3ec20f2e
	I0419 18:38:10.427543   13300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34162.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 18:38:10.465796   13300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 18:38:10.497378   13300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:38:10.506810   13300 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:38:10.507169   13300 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:38:10.524536   13300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:38:10.533206   13300 command_runner.go:130] > b5213941
	I0419 18:38:10.551630   13300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
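The sequence above (run `openssl x509 -hash`, then symlink `<hash>.0` in `/etc/ssl/certs`) is how OpenSSL's trust store locates CA certificates by subject-name hash. A minimal sketch, using a throwaway self-signed CA in a temp directory instead of the real `/usr/share/ca-certificates` and `/etc/ssl/certs` paths, so it runs without root:

```shell
# Sketch of the CA-install steps logged above: OpenSSL looks CAs up via a
# subject-hash symlink named <hash>.0. All paths here are a scratch dir.
certdir=$(mktemp -d)

# Generate a throwaway self-signed CA (stand-in for minikube's 3416.pem).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA" \
  -keyout "$certdir/ca.key" -out "$certdir/ca.pem" 2>/dev/null

# Compute the 8-hex-digit subject hash (e.g. the 51391683 seen in the log)
# and create the <hash>.0 symlink OpenSSL expects.
hash=$(openssl x509 -hash -noout -in "$certdir/ca.pem")
ln -fs "$certdir/ca.pem" "$certdir/${hash}.0"
```

The `.0` suffix is a collision index; minikube's `test -L ... || ln -fs ...` form simply skips the link if one already exists.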
	I0419 18:38:10.585339   13300 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 18:38:10.592199   13300 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 18:38:10.592509   13300 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 18:38:10.592509   13300 kubeadm.go:928] updating node {m02 172.19.32.249 8443 v1.30.0 docker false true} ...
	I0419 18:38:10.593111   13300 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-348000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.32.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 18:38:10.608784   13300 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 18:38:10.627128   13300 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	I0419 18:38:10.627773   13300 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0419 18:38:10.643919   13300 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0419 18:38:10.665112   13300 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0419 18:38:10.665112   13300 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0419 18:38:10.665112   13300 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0419 18:38:10.665814   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0419 18:38:10.665814   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0419 18:38:10.682058   13300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 18:38:10.682462   13300 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0419 18:38:10.684845   13300 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0419 18:38:10.705561   13300 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0419 18:38:10.705662   13300 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0419 18:38:10.705662   13300 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0419 18:38:10.705662   13300 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0419 18:38:10.705662   13300 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0419 18:38:10.705662   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0419 18:38:10.705662   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0419 18:38:10.723434   13300 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0419 18:38:10.825057   13300 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0419 18:38:10.836911   13300 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0419 18:38:10.836956   13300 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
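The binary transfer above is gated by the `?checksum=file:...sha256` URLs logged earlier: each kubectl/kubeadm/kubelet download is verified against a published SHA-256 before being copied into `/var/lib/minikube/binaries`. A hedged sketch of that verification step (the helper name `verify_sha256` is mine, not minikube's; it checks a local file rather than fetching from dl.k8s.io):

```shell
# Sketch of checksum-gated downloads: compare a file against an expected
# SHA-256 digest using sha256sum's check mode. Returns non-zero on mismatch.
verify_sha256() {
  file=$1
  expected=$2
  # sha256sum -c reads "<hash>  <path>" lines from stdin.
  echo "${expected}  ${file}" | sha256sum -c - > /dev/null
}
```

In the real flow the expected digest comes from the companion `.sha256` file on dl.k8s.io, and a failed check aborts the transfer instead of installing a corrupt binary.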
	I0419 18:38:12.188789   13300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0419 18:38:12.203726   13300 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0419 18:38:12.244783   13300 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 18:38:12.290152   13300 ssh_runner.go:195] Run: grep 172.19.42.231	control-plane.minikube.internal$ /etc/hosts
	I0419 18:38:12.298516   13300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.42.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 18:38:12.340078   13300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:38:12.559948   13300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 18:38:12.600302   13300 host.go:66] Checking if "multinode-348000" exists ...
	I0419 18:38:12.602443   13300 start.go:316] joinCluster: &{Name:multinode-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.42.231 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.32.249 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\j
enkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 18:38:12.602443   13300 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0419 18:38:12.602443   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:38:14.705532   13300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:38:14.708807   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:38:14.708885   13300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:38:17.169299   13300 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:38:17.183675   13300 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:38:17.183675   13300 sshutil.go:53] new ssh client: &{IP:172.19.42.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 18:38:17.376985   13300 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ij4zt9.ndq4x1g0ttwa8qhc --discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 
	I0419 18:38:17.376985   13300 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7745311s)
	I0419 18:38:17.376985   13300 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.19.32.249 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0419 18:38:17.376985   13300 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ij4zt9.ndq4x1g0ttwa8qhc --discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-348000-m02"
	I0419 18:38:17.579371   13300 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0419 18:38:18.899961   13300 command_runner.go:130] > [preflight] Running pre-flight checks
	I0419 18:38:18.900063   13300 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0419 18:38:18.900271   13300 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0419 18:38:18.900333   13300 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 18:38:18.900403   13300 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 18:38:18.900503   13300 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0419 18:38:18.900628   13300 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0419 18:38:18.900810   13300 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002718963s
	I0419 18:38:18.900871   13300 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0419 18:38:18.900871   13300 command_runner.go:130] > This node has joined the cluster:
	I0419 18:38:18.900871   13300 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0419 18:38:18.900871   13300 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0419 18:38:18.900871   13300 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0419 18:38:18.900871   13300 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ij4zt9.ndq4x1g0ttwa8qhc --discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-348000-m02": (1.5238826s)
	I0419 18:38:18.900871   13300 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0419 18:38:19.104877   13300 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0419 18:38:19.311749   13300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-348000-m02 minikube.k8s.io/updated_at=2024_04_19T18_38_19_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=multinode-348000 minikube.k8s.io/primary=false
	I0419 18:38:19.463760   13300 command_runner.go:130] > node/multinode-348000-m02 labeled
	I0419 18:38:19.463949   13300 start.go:318] duration metric: took 6.861491s to joinCluster
	I0419 18:38:19.464199   13300 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.32.249 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0419 18:38:19.467843   13300 out.go:177] * Verifying Kubernetes components...
	I0419 18:38:19.464871   13300 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:38:19.483759   13300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:38:19.697665   13300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 18:38:19.733368   13300 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 18:38:19.734087   13300 kapi.go:59] client config for multinode-348000: &rest.Config{Host:"https://172.19.42.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c35620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 18:38:19.734878   13300 node_ready.go:35] waiting up to 6m0s for node "multinode-348000-m02" to be "Ready" ...
	I0419 18:38:19.735117   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:19.735117   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:19.735117   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:19.735178   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:19.749248   13300 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0419 18:38:19.749248   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:19.749248   13300 round_trippers.go:580]     Audit-Id: c20f07ad-f25c-4e1b-b89c-2cacaf2e6ccb
	I0419 18:38:19.749248   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:19.749248   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:19.749248   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:19.749248   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:19.749248   13300 round_trippers.go:580]     Content-Length: 3920
	I0419 18:38:19.751315   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:19 GMT
	I0419 18:38:19.751434   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"580","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2896 chars]
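The repeated GETs that follow are minikube polling the node object until its `Ready` condition flips to `True`, with the 6-minute budget announced above. The equivalent loop, sketched with `kubectl` instead of minikube's internal REST client (the function name and polling interval are assumptions, not minikube's code):

```shell
# Sketch of the node-Ready wait loop: poll the Ready condition until it is
# "True" or roughly six minutes elapse (720 polls at 0.5s each).
wait_node_ready() {
  node=$1
  tries=0
  while [ "$tries" -lt 720 ]; do
    status=$(kubectl get node "$node" \
      -o 'jsonpath={.status.conditions[?(@.type=="Ready")].status}')
    [ "$status" = "True" ] && return 0
    tries=$((tries + 1))
    sleep 0.5
  done
  return 1   # timed out without the node becoming Ready
}
```

Each iteration corresponds to one `round_trippers` GET/response pair in the log; the JSON bodies above show the node still lacks a `Ready=True` condition, which is why the polling continues.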
	I0419 18:38:20.247263   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:20.247497   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:20.247497   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:20.247497   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:20.248138   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:20.248138   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:20.248138   13300 round_trippers.go:580]     Audit-Id: 13977c51-28ad-4b67-909e-d81d67c0c533
	I0419 18:38:20.248138   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:20.248138   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:20.251964   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:20.251964   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:20.251964   13300 round_trippers.go:580]     Content-Length: 3920
	I0419 18:38:20.251964   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:20 GMT
	I0419 18:38:20.252080   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"580","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2896 chars]
	I0419 18:38:20.747763   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:20.747911   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:20.747911   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:20.747911   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:20.748235   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:20.748235   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:20.748235   13300 round_trippers.go:580]     Content-Length: 3920
	I0419 18:38:20.748235   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:20 GMT
	I0419 18:38:20.748235   13300 round_trippers.go:580]     Audit-Id: 944dca1d-5a76-442a-8cd8-dcd858e56cf1
	I0419 18:38:20.748235   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:20.748235   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:20.748235   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:20.748235   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:20.752413   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"580","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2896 chars]
	I0419 18:38:21.245167   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:21.245408   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:21.245550   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:21.245550   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:21.251991   13300 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:38:21.251991   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:21.251991   13300 round_trippers.go:580]     Audit-Id: 3c32d067-4fbc-4d10-8234-e9d40a19ac9f
	I0419 18:38:21.252085   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:21.252085   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:21.252085   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:21.252085   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:21.252085   13300 round_trippers.go:580]     Content-Length: 3920
	I0419 18:38:21.252085   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:21 GMT
	I0419 18:38:21.252177   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"580","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2896 chars]
	I0419 18:38:21.757044   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:21.757044   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:21.757207   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:21.757207   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:21.757601   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:21.757601   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:21.757601   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:21.757601   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:21.757601   13300 round_trippers.go:580]     Content-Length: 3920
	I0419 18:38:21.757601   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:21 GMT
	I0419 18:38:21.757601   13300 round_trippers.go:580]     Audit-Id: 2fe214e3-c7e0-46a6-87ae-a38b7fd42a19
	I0419 18:38:21.761171   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:21.761171   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:21.761304   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"580","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2896 chars]
	I0419 18:38:21.761728   13300 node_ready.go:53] node "multinode-348000-m02" has status "Ready":"False"
	I0419 18:38:22.246767   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:22.246767   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:22.246767   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:22.246767   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:22.279084   13300 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0419 18:38:22.292220   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:22.292220   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:22.292220   13300 round_trippers.go:580]     Content-Length: 3920
	I0419 18:38:22.292220   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:22 GMT
	I0419 18:38:22.292220   13300 round_trippers.go:580]     Audit-Id: e04891d0-3cb5-47ab-b6fa-52d6e297d62e
	I0419 18:38:22.292220   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:22.292220   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:22.292220   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:22.292220   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"580","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2896 chars]
	I0419 18:38:22.747493   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:22.747493   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:22.747493   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:22.747493   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:22.751369   13300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:38:22.751369   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:22.751369   13300 round_trippers.go:580]     Audit-Id: 681eb9eb-41d0-4545-aa7a-d107d88575a7
	I0419 18:38:22.751369   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:22.751369   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:22.751369   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:22.751369   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:22.751369   13300 round_trippers.go:580]     Content-Length: 4029
	I0419 18:38:22.751369   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:22 GMT
	I0419 18:38:22.751369   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"588","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0419 18:38:23.240600   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:23.240600   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:23.240600   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:23.240600   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:23.241643   13300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:38:23.241643   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:23.241643   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:23.241643   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:23.241643   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:23.241643   13300 round_trippers.go:580]     Content-Length: 4029
	I0419 18:38:23.241643   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:23 GMT
	I0419 18:38:23.245230   13300 round_trippers.go:580]     Audit-Id: d3bdb8e0-d404-45b4-858b-8b2dec57005a
	I0419 18:38:23.245230   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:23.245322   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"588","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0419 18:38:23.739010   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:23.739010   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:23.739010   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:23.739010   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:23.739546   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:23.746710   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:23.746710   13300 round_trippers.go:580]     Audit-Id: 5d6faae5-69ca-4f30-944d-67b2f55344e8
	I0419 18:38:23.746710   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:23.746710   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:23.746710   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:23.746710   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:23.746710   13300 round_trippers.go:580]     Content-Length: 4029
	I0419 18:38:23.746710   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:23 GMT
	I0419 18:38:23.746710   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"588","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0419 18:38:24.244768   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:24.244768   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:24.244768   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:24.244768   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:24.248208   13300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:38:24.248208   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:24.249376   13300 round_trippers.go:580]     Content-Length: 4029
	I0419 18:38:24.249376   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:24 GMT
	I0419 18:38:24.249376   13300 round_trippers.go:580]     Audit-Id: bd7d4c8d-90ca-46fc-af2e-dce0bd2db621
	I0419 18:38:24.249376   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:24.249376   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:24.249376   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:24.249376   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:24.249589   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"588","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0419 18:38:24.249804   13300 node_ready.go:53] node "multinode-348000-m02" has status "Ready":"False"
	I0419 18:38:24.743104   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:24.743104   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:24.743104   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:24.743104   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:24.752100   13300 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 18:38:24.752100   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:24.752178   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:24.752219   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:24.752219   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:24.752219   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:24.752219   13300 round_trippers.go:580]     Content-Length: 4029
	I0419 18:38:24.752219   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:24 GMT
	I0419 18:38:24.752219   13300 round_trippers.go:580]     Audit-Id: e2e6cbcd-9d6f-4d88-8d18-211029c3199f
	I0419 18:38:24.752219   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"588","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0419 18:38:25.236810   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:25.236810   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:25.236810   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:25.236810   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:25.241245   13300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:38:25.241245   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:25.241245   13300 round_trippers.go:580]     Audit-Id: 1fa95447-193d-43bf-9675-0c1c429f99b8
	I0419 18:38:25.241245   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:25.241245   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:25.241245   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:25.241245   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:25.241245   13300 round_trippers.go:580]     Content-Length: 4029
	I0419 18:38:25.241245   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:25 GMT
	I0419 18:38:25.241245   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"588","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0419 18:38:25.735725   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:25.735725   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:25.735725   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:25.735725   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:25.740157   13300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:38:25.740157   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:25.740157   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:25 GMT
	I0419 18:38:25.740157   13300 round_trippers.go:580]     Audit-Id: 307d2b2f-3de8-470b-88b5-c8c7293710b7
	I0419 18:38:25.740157   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:25.740157   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:25.740157   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:25.740157   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:25.740157   13300 round_trippers.go:580]     Content-Length: 4029
	I0419 18:38:25.740157   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"588","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0419 18:38:26.247687   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:26.247796   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:26.247796   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:26.247796   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:26.248175   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:26.248175   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:26.248175   13300 round_trippers.go:580]     Audit-Id: 951de83e-2eac-4d73-b2f1-592ce919f697
	I0419 18:38:26.248175   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:26.248175   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:26.248175   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:26.248175   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:26.248175   13300 round_trippers.go:580]     Content-Length: 4029
	I0419 18:38:26.248175   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:26 GMT
	I0419 18:38:26.248175   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"588","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0419 18:38:26.749191   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:26.749191   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:26.749191   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:26.749191   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:26.750905   13300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:38:26.750905   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:26.753874   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:26.753874   13300 round_trippers.go:580]     Content-Length: 4029
	I0419 18:38:26.753874   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:26 GMT
	I0419 18:38:26.753874   13300 round_trippers.go:580]     Audit-Id: 8954b9a9-8fcc-49b9-95b5-a57dad137716
	I0419 18:38:26.753874   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:26.753874   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:26.753874   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:26.754015   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"588","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0419 18:38:26.754387   13300 node_ready.go:53] node "multinode-348000-m02" has status "Ready":"False"
	I0419 18:38:27.246552   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:27.246618   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:27.246618   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:27.246618   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:27.248452   13300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:38:27.248452   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:27.248452   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:27 GMT
	I0419 18:38:27.248452   13300 round_trippers.go:580]     Audit-Id: 232a0c78-2485-469d-9579-efc568215ab9
	I0419 18:38:27.248452   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:27.248452   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:27.248452   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:27.248452   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:27.248452   13300 round_trippers.go:580]     Content-Length: 4029
	I0419 18:38:27.250983   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"588","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0419 18:38:27.745449   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:27.745449   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:27.745449   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:27.745449   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:28.092864   13300 round_trippers.go:574] Response Status: 200 OK in 347 milliseconds
	I0419 18:38:28.093075   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:28.093075   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:28.093075   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:28.093075   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:28.093075   13300 round_trippers.go:580]     Content-Length: 4029
	I0419 18:38:28.093075   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:28 GMT
	I0419 18:38:28.093075   13300 round_trippers.go:580]     Audit-Id: 23a75f39-0c6b-436e-a99e-496f7f8437a9
	I0419 18:38:28.093075   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:28.093207   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"588","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0419 18:38:28.251089   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:28.251089   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:28.251204   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:28.251204   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:28.251439   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:28.251439   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:28.254966   13300 round_trippers.go:580]     Audit-Id: 0dc14df5-19ac-456c-8fb8-17654ed89f18
	I0419 18:38:28.254966   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:28.254966   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:28.254966   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:28.254966   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:28.254966   13300 round_trippers.go:580]     Content-Length: 4029
	I0419 18:38:28.254966   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:28 GMT
	I0419 18:38:28.255079   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"588","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0419 18:38:28.737177   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:28.737418   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:28.737418   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:28.737418   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:28.739904   13300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:38:28.741328   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:28.741328   13300 round_trippers.go:580]     Audit-Id: 0c8446aa-69e0-4c25-b87c-26934d35eee0
	I0419 18:38:28.741392   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:28.741392   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:28.741392   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:28.741392   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:28.741392   13300 round_trippers.go:580]     Content-Length: 4029
	I0419 18:38:28.741392   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:28 GMT
	I0419 18:38:28.741561   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"588","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0419 18:38:29.242169   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:29.242169   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:29.242169   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:29.242169   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:29.242720   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:29.242720   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:29.242720   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:29.242720   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:29.242720   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:29.242720   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:29.242720   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:29 GMT
	I0419 18:38:29.242720   13300 round_trippers.go:580]     Audit-Id: 6cd5e775-504d-4606-9903-ad55f3185468
	I0419 18:38:29.246516   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"600","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0419 18:38:29.246918   13300 node_ready.go:53] node "multinode-348000-m02" has status "Ready":"False"
	I0419 18:38:29.740086   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:29.740140   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:29.740140   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:29.740140   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:29.742058   13300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:38:29.744595   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:29.744595   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:29.744661   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:29 GMT
	I0419 18:38:29.744661   13300 round_trippers.go:580]     Audit-Id: 456006a6-1a48-40a9-b493-1411df260546
	I0419 18:38:29.744661   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:29.744661   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:29.744661   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:29.744661   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"600","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0419 18:38:30.250626   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:30.250626   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:30.250626   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:30.250626   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:30.252205   13300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:38:30.252205   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:30.252205   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:30.252205   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:30.254983   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:30 GMT
	I0419 18:38:30.254983   13300 round_trippers.go:580]     Audit-Id: de90e255-3cf9-45c5-a12d-855f8876d21c
	I0419 18:38:30.254983   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:30.254983   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:30.255268   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"600","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0419 18:38:30.738021   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:30.738021   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:30.738021   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:30.738021   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:30.738729   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:30.742326   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:30.742326   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:30.742326   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:30 GMT
	I0419 18:38:30.742326   13300 round_trippers.go:580]     Audit-Id: fdb18965-51e6-4a70-b8c8-68cb2acecb62
	I0419 18:38:30.742382   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:30.742382   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:30.742382   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:30.742604   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"600","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0419 18:38:31.247440   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:31.247487   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:31.247554   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:31.247554   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:31.247919   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:31.251831   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:31.251831   13300 round_trippers.go:580]     Audit-Id: e3e496f5-07b1-4552-8b3b-8d0c6d2fef8f
	I0419 18:38:31.251831   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:31.251831   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:31.251912   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:31.251912   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:31.251912   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:31 GMT
	I0419 18:38:31.252309   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"600","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0419 18:38:31.252706   13300 node_ready.go:53] node "multinode-348000-m02" has status "Ready":"False"
	I0419 18:38:31.742234   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:31.742352   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:31.742352   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:31.742352   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:31.743322   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:31.746152   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:31.746152   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:31.746152   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:31.746152   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:31 GMT
	I0419 18:38:31.746152   13300 round_trippers.go:580]     Audit-Id: 186d4ab2-af32-48fd-823d-494bf80546a1
	I0419 18:38:31.746152   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:31.746211   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:31.746411   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"600","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0419 18:38:32.242800   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:32.242850   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:32.242896   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:32.242896   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:32.243208   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:32.246549   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:32.246610   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:32.246610   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:32.246610   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:32.246610   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:32.246610   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:32 GMT
	I0419 18:38:32.246610   13300 round_trippers.go:580]     Audit-Id: aa348197-efc6-435e-ae43-6a0d96bd99da
	I0419 18:38:32.246846   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"600","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0419 18:38:32.736470   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:32.736470   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:32.736470   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:32.736470   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:32.742888   13300 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:38:32.742888   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:32.742888   13300 round_trippers.go:580]     Audit-Id: 1fba41d5-2faf-421c-9cb8-651fe0be7fad
	I0419 18:38:32.742888   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:32.742888   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:32.742888   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:32.742888   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:32.742888   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:32 GMT
	I0419 18:38:32.742888   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"600","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0419 18:38:33.245352   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:33.245561   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:33.245621   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:33.245621   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:33.246844   13300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:38:33.249768   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:33.249768   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:33.249768   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:33.249768   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:33.249768   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:33 GMT
	I0419 18:38:33.249768   13300 round_trippers.go:580]     Audit-Id: 44805f61-7ed2-4c3c-b65a-bd0dfe896b4f
	I0419 18:38:33.249768   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:33.249872   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"600","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0419 18:38:33.741561   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:33.741625   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:33.741625   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:33.741685   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:33.741952   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:33.741952   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:33.741952   13300 round_trippers.go:580]     Audit-Id: c8b6befc-9593-4a26-b45b-59943e0c5724
	I0419 18:38:33.741952   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:33.741952   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:33.741952   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:33.741952   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:33.741952   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:33 GMT
	I0419 18:38:33.746503   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"600","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0419 18:38:33.747160   13300 node_ready.go:53] node "multinode-348000-m02" has status "Ready":"False"
	I0419 18:38:34.237109   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:34.237109   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:34.237321   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:34.237321   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:34.245633   13300 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 18:38:34.245633   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:34.245698   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:34 GMT
	I0419 18:38:34.245698   13300 round_trippers.go:580]     Audit-Id: 2b65ad09-35a4-40b4-87f8-4a8cdad85f58
	I0419 18:38:34.245698   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:34.245698   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:34.245698   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:34.245698   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:34.246015   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"600","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0419 18:38:34.744352   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:34.744352   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:34.744352   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:34.744352   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:34.744960   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:34.749145   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:34.749145   13300 round_trippers.go:580]     Audit-Id: 755b8917-e664-4381-9d27-c356a1505a5e
	I0419 18:38:34.749145   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:34.749145   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:34.749145   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:34.749145   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:34.749145   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:34 GMT
	I0419 18:38:34.749232   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"600","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0419 18:38:35.236250   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:35.236499   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:35.236499   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:35.236499   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:35.236752   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:35.236752   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:35.240982   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:35.240982   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:35 GMT
	I0419 18:38:35.240982   13300 round_trippers.go:580]     Audit-Id: abc79a50-b87a-4f32-952b-c6f1b7091efa
	I0419 18:38:35.240982   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:35.240982   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:35.240982   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:35.241141   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"600","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0419 18:38:35.745735   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:35.745792   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:35.745792   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:35.745792   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:35.746345   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:35.746345   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:35.746345   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:35 GMT
	I0419 18:38:35.746345   13300 round_trippers.go:580]     Audit-Id: 92f04679-6874-4ab6-ba31-9ac3e67c5bff
	I0419 18:38:35.746345   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:35.746345   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:35.746345   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:35.750718   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:35.750839   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"600","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0419 18:38:35.751173   13300 node_ready.go:53] node "multinode-348000-m02" has status "Ready":"False"
	I0419 18:38:36.235748   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:36.235748   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:36.235748   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:36.235748   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:36.236696   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:36.240036   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:36.240036   13300 round_trippers.go:580]     Audit-Id: edab4b7a-89e6-442c-90f8-c8d497d7136c
	I0419 18:38:36.240036   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:36.240036   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:36.240036   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:36.240036   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:36.240036   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:36 GMT
	I0419 18:38:36.240547   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"600","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0419 18:38:36.738378   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:36.738378   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:36.738378   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:36.738378   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:36.738955   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:36.738955   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:36.738955   13300 round_trippers.go:580]     Audit-Id: 08c89ac5-e772-4e5d-a23f-a37fdc54f9e0
	I0419 18:38:36.738955   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:36.738955   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:36.745184   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:36.745184   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:36.745184   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:36 GMT
	I0419 18:38:36.745572   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"600","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0419 18:38:37.265320   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:37.265410   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:37.265410   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:37.265410   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:37.266373   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:37.266373   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:37.266373   13300 round_trippers.go:580]     Audit-Id: 46735695-22e6-4dc2-b4a4-6002184e122a
	I0419 18:38:37.266373   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:37.266373   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:37.266373   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:37.266373   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:37.266373   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:37 GMT
	I0419 18:38:37.271736   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"600","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0419 18:38:37.749504   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:37.749609   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:37.749609   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:37.749609   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:37.750046   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:37.750046   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:37.750046   13300 round_trippers.go:580]     Audit-Id: 564388e4-d5d2-484c-bace-8a87c363b40e
	I0419 18:38:37.750046   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:37.754434   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:37.754434   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:37.754434   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:37.754434   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:37 GMT
	I0419 18:38:37.755193   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"617","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3263 chars]
	I0419 18:38:37.755748   13300 node_ready.go:49] node "multinode-348000-m02" has status "Ready":"True"
	I0419 18:38:37.755748   13300 node_ready.go:38] duration metric: took 18.0208288s for node "multinode-348000-m02" to be "Ready" ...
	I0419 18:38:37.755855   13300 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 18:38:37.755962   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods
	I0419 18:38:37.756080   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:37.756093   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:37.756108   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:37.757091   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:37.757091   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:37.757091   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:37.757091   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:37.757091   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:37.757091   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:37 GMT
	I0419 18:38:37.761031   13300 round_trippers.go:580]     Audit-Id: f232ef38-1be9-4fac-9a5a-d7300e9f6a8d
	I0419 18:38:37.761031   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:37.762513   13300 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"617"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"424","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70428 chars]
	I0419 18:38:37.766177   13300 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace to be "Ready" ...
	I0419 18:38:37.766381   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:38:37.766381   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:37.766381   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:37.766381   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:37.767006   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:37.767006   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:37.767006   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:37.767006   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:37.767006   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:37.767006   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:37.767006   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:37 GMT
	I0419 18:38:37.769819   13300 round_trippers.go:580]     Audit-Id: 0e57b121-83a0-4769-8fd2-c8ee16cf662f
	I0419 18:38:37.769958   13300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"424","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0419 18:38:37.770328   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:38:37.770328   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:37.770328   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:37.770328   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:37.770934   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:37.770934   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:37.770934   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:37.770934   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:37.773605   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:37.773605   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:37.773605   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:37 GMT
	I0419 18:38:37.773605   13300 round_trippers.go:580]     Audit-Id: e53a5beb-cd31-434a-9f1f-e1feb2fbf898
	I0419 18:38:37.773916   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"420","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0419 18:38:37.774459   13300 pod_ready.go:92] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"True"
	I0419 18:38:37.774459   13300 pod_ready.go:81] duration metric: took 8.2827ms for pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace to be "Ready" ...
	I0419 18:38:37.774459   13300 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:38:37.774608   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-348000
	I0419 18:38:37.774678   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:37.774678   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:37.774703   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:37.775371   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:37.775371   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:37.775371   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:37.775371   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:37.775371   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:37.775371   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:37 GMT
	I0419 18:38:37.777282   13300 round_trippers.go:580]     Audit-Id: 24a33862-b401-4dd5-ba9b-1367e1297ba2
	I0419 18:38:37.777282   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:37.777566   13300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-348000","namespace":"kube-system","uid":"af4afa87-c484-4b73-9a4d-e86ddcd90049","resourceVersion":"380","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.42.231:2379","kubernetes.io/config.hash":"8fef0b92f87f018a58c19217fdf5d6e1","kubernetes.io/config.mirror":"8fef0b92f87f018a58c19217fdf5d6e1","kubernetes.io/config.seen":"2024-04-20T01:35:08.321891557Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0419 18:38:37.777763   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:38:37.777763   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:37.777763   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:37.777763   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:37.779265   13300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:38:37.779265   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:37.779265   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:37.779265   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:37.779265   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:37.779265   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:37 GMT
	I0419 18:38:37.779265   13300 round_trippers.go:580]     Audit-Id: db7494f6-e3c7-4f3f-960d-5a02b7615bc8
	I0419 18:38:37.779265   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:37.781497   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"420","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0419 18:38:37.781727   13300 pod_ready.go:92] pod "etcd-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 18:38:37.781727   13300 pod_ready.go:81] duration metric: took 7.1955ms for pod "etcd-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:38:37.781727   13300 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:38:37.781727   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-348000
	I0419 18:38:37.781727   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:37.781727   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:37.781727   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:37.786358   13300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:38:37.786397   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:37.786397   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:37.786446   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:37 GMT
	I0419 18:38:37.786446   13300 round_trippers.go:580]     Audit-Id: c2cb91b2-6965-4471-aaa8-a6025f1a937c
	I0419 18:38:37.786446   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:37.786446   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:37.786446   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:37.786664   13300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-348000","namespace":"kube-system","uid":"18f5e677-6a96-47ee-9f61-60ab9445eb92","resourceVersion":"383","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.42.231:8443","kubernetes.io/config.hash":"89aa15d5f8e328791151d96100a36918","kubernetes.io/config.mirror":"89aa15d5f8e328791151d96100a36918","kubernetes.io/config.seen":"2024-04-20T01:35:08.321896559Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0419 18:38:37.787239   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:38:37.787239   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:37.787239   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:37.787239   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:37.791287   13300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:38:37.791287   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:37.791287   13300 round_trippers.go:580]     Audit-Id: 912d9c04-4974-42c8-ace7-bc775f442779
	I0419 18:38:37.791287   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:37.791287   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:37.791287   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:37.791287   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:37.791287   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:37 GMT
	I0419 18:38:37.791287   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"420","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0419 18:38:37.791854   13300 pod_ready.go:92] pod "kube-apiserver-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 18:38:37.792094   13300 pod_ready.go:81] duration metric: took 10.367ms for pod "kube-apiserver-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:38:37.792094   13300 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:38:37.792094   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-348000
	I0419 18:38:37.792094   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:37.792094   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:37.792094   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:37.793964   13300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:38:37.793964   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:37.793964   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:37.793964   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:37 GMT
	I0419 18:38:37.793964   13300 round_trippers.go:580]     Audit-Id: 6debd7e7-7a8c-4c29-8b35-b850b1177411
	I0419 18:38:37.793964   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:37.793964   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:37.793964   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:37.795759   13300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-348000","namespace":"kube-system","uid":"299bb088-9795-4452-87a8-5e96bcacedde","resourceVersion":"381","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"30aa2729d0c65b9f89e1ae2d151edd9b","kubernetes.io/config.mirror":"30aa2729d0c65b9f89e1ae2d151edd9b","kubernetes.io/config.seen":"2024-04-20T01:35:08.321898260Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0419 18:38:37.796433   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:38:37.796490   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:37.796490   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:37.796490   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:37.798833   13300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:38:37.799457   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:37.799457   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:37.799488   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:37.799488   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:37.799488   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:37 GMT
	I0419 18:38:37.799488   13300 round_trippers.go:580]     Audit-Id: 9d517833-cfb6-41fa-9389-8f2937bacb8c
	I0419 18:38:37.799488   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:37.799488   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"420","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0419 18:38:37.800184   13300 pod_ready.go:92] pod "kube-controller-manager-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 18:38:37.800224   13300 pod_ready.go:81] duration metric: took 8.1306ms for pod "kube-controller-manager-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:38:37.800254   13300 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bjv9b" in "kube-system" namespace to be "Ready" ...
	I0419 18:38:37.962286   13300 request.go:629] Waited for 161.7217ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bjv9b
	I0419 18:38:37.962433   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bjv9b
	I0419 18:38:37.962503   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:37.962530   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:37.962530   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:37.963293   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:37.966859   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:37.966859   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:37.966859   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:37.966859   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:37.966859   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:37.966859   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:37 GMT
	I0419 18:38:37.966859   13300 round_trippers.go:580]     Audit-Id: a50e4180-e51e-40a2-bf36-aa1440374743
	I0419 18:38:37.967168   13300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bjv9b","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e909d14-543a-4734-8c17-7e2b8188553d","resourceVersion":"601","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
	I0419 18:38:38.159844   13300 request.go:629] Waited for 191.7401ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:38.159844   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:38:38.159844   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:38.159844   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:38.159844   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:38.160374   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:38.160374   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:38.160374   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:38.160374   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:38.163842   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:38 GMT
	I0419 18:38:38.163842   13300 round_trippers.go:580]     Audit-Id: 8a4812db-f6eb-41fd-936a-fcbfd3b12cf9
	I0419 18:38:38.163842   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:38.163842   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:38.164059   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"617","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3263 chars]
	I0419 18:38:38.164551   13300 pod_ready.go:92] pod "kube-proxy-bjv9b" in "kube-system" namespace has status "Ready":"True"
	I0419 18:38:38.164551   13300 pod_ready.go:81] duration metric: took 364.2955ms for pod "kube-proxy-bjv9b" in "kube-system" namespace to be "Ready" ...
	I0419 18:38:38.164551   13300 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kj76x" in "kube-system" namespace to be "Ready" ...
	I0419 18:38:38.363375   13300 request.go:629] Waited for 198.8241ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kj76x
	I0419 18:38:38.363705   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kj76x
	I0419 18:38:38.363705   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:38.363705   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:38.363705   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:38.366687   13300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:38:38.366687   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:38.366687   13300 round_trippers.go:580]     Audit-Id: 6ee3d1ce-1718-4fb0-a2bd-1d1521aba44b
	I0419 18:38:38.366687   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:38.366687   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:38.366687   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:38.366687   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:38.366687   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:38 GMT
	I0419 18:38:38.368778   13300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kj76x","generateName":"kube-proxy-","namespace":"kube-system","uid":"274342c4-c21f-4279-b0ea-743d8e2c1463","resourceVersion":"377","creationTimestamp":"2024-04-20T01:35:22Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0419 18:38:38.573766   13300 request.go:629] Waited for 204.9219ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:38:38.573796   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:38:38.573796   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:38.573796   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:38.573796   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:38.575498   13300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:38:38.575498   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:38.577864   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:38.577864   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:38.577864   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:38 GMT
	I0419 18:38:38.577864   13300 round_trippers.go:580]     Audit-Id: 309193a9-f5a2-4b07-8883-8a77b9537c86
	I0419 18:38:38.577864   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:38.577864   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:38.577979   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"420","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0419 18:38:38.578705   13300 pod_ready.go:92] pod "kube-proxy-kj76x" in "kube-system" namespace has status "Ready":"True"
	I0419 18:38:38.578705   13300 pod_ready.go:81] duration metric: took 414.1533ms for pod "kube-proxy-kj76x" in "kube-system" namespace to be "Ready" ...
	I0419 18:38:38.578705   13300 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:38:38.755846   13300 request.go:629] Waited for 176.8958ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348000
	I0419 18:38:38.756066   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348000
	I0419 18:38:38.756066   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:38.756148   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:38.756148   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:38.760990   13300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:38:38.764011   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:38.764011   13300 round_trippers.go:580]     Audit-Id: 0d15acbd-23c2-47d7-914b-c6de34117c21
	I0419 18:38:38.764011   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:38.764011   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:38.764011   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:38.764011   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:38.764011   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:38 GMT
	I0419 18:38:38.764011   13300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-348000","namespace":"kube-system","uid":"000cfafe-a513-4738-9de2-3c25244b72be","resourceVersion":"382","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"92813b2aed63b63058d3fd06709fa24e","kubernetes.io/config.mirror":"92813b2aed63b63058d3fd06709fa24e","kubernetes.io/config.seen":"2024-04-20T01:35:08.321899460Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0419 18:38:38.961303   13300 request.go:629] Waited for 196.0681ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:38:38.961303   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes/multinode-348000
	I0419 18:38:38.961303   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:38.961303   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:38.961303   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:38.961951   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:38.961951   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:38.961951   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:38.961951   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:38.961951   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:38.961951   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:38.961951   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:38 GMT
	I0419 18:38:38.961951   13300 round_trippers.go:580]     Audit-Id: c129b214-8de1-40d2-864a-c387d91d789c
	I0419 18:38:38.969384   13300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"420","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0419 18:38:38.970135   13300 pod_ready.go:92] pod "kube-scheduler-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 18:38:38.970216   13300 pod_ready.go:81] duration metric: took 391.5104ms for pod "kube-scheduler-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:38:38.970216   13300 pod_ready.go:38] duration metric: took 1.2143588s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 18:38:38.970216   13300 system_svc.go:44] waiting for kubelet service to be running ....
	I0419 18:38:38.984056   13300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 18:38:39.013470   13300 system_svc.go:56] duration metric: took 40.2181ms WaitForService to wait for kubelet
	I0419 18:38:39.013470   13300 kubeadm.go:576] duration metric: took 19.5491272s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 18:38:39.013583   13300 node_conditions.go:102] verifying NodePressure condition ...
	I0419 18:38:39.155870   13300 request.go:629] Waited for 141.9024ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.231:8443/api/v1/nodes
	I0419 18:38:39.155870   13300 round_trippers.go:463] GET https://172.19.42.231:8443/api/v1/nodes
	I0419 18:38:39.155870   13300 round_trippers.go:469] Request Headers:
	I0419 18:38:39.155870   13300 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:38:39.155870   13300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:38:39.156630   13300 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 18:38:39.156630   13300 round_trippers.go:577] Response Headers:
	I0419 18:38:39.156630   13300 round_trippers.go:580]     Content-Type: application/json
	I0419 18:38:39.156630   13300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:38:39.156630   13300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:38:39.156630   13300 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:38:39 GMT
	I0419 18:38:39.156630   13300 round_trippers.go:580]     Audit-Id: 33029f10-8d5d-4d80-8d41-c89740ebec37
	I0419 18:38:39.156630   13300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:38:39.160686   13300 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"619"},"items":[{"metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"420","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9267 chars]
	I0419 18:38:39.161700   13300 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 18:38:39.161700   13300 node_conditions.go:123] node cpu capacity is 2
	I0419 18:38:39.161792   13300 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 18:38:39.161792   13300 node_conditions.go:123] node cpu capacity is 2
	I0419 18:38:39.161792   13300 node_conditions.go:105] duration metric: took 148.208ms to run NodePressure ...
	I0419 18:38:39.161792   13300 start.go:240] waiting for startup goroutines ...
	I0419 18:38:39.161792   13300 start.go:254] writing updated cluster config ...
	I0419 18:38:39.173913   13300 ssh_runner.go:195] Run: rm -f paused
	I0419 18:38:39.323316   13300 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0419 18:38:39.326891   13300 out.go:177] * Done! kubectl is now configured to use "multinode-348000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 20 01:35:37 multinode-348000 dockerd[1331]: time="2024-04-20T01:35:37.540512098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 01:35:37 multinode-348000 dockerd[1331]: time="2024-04-20T01:35:37.563111210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 20 01:35:37 multinode-348000 dockerd[1331]: time="2024-04-20T01:35:37.563167913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 01:35:37 multinode-348000 dockerd[1331]: time="2024-04-20T01:35:37.563179813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 01:35:37 multinode-348000 dockerd[1331]: time="2024-04-20T01:35:37.563261418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 01:35:37 multinode-348000 cri-dockerd[1230]: time="2024-04-20T01:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/da1d06ec238f43c7ad43cae75e142a6d15b9c8fb69f88ad8079f167f3f3a6fd4/resolv.conf as [nameserver 172.19.32.1]"
	Apr 20 01:35:37 multinode-348000 cri-dockerd[1230]: time="2024-04-20T01:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2dd294415aae178d6b9bed0368d49bedc6d0afa8f5b9ad0011c73ffcb2c24b3c/resolv.conf as [nameserver 172.19.32.1]"
	Apr 20 01:35:37 multinode-348000 dockerd[1331]: time="2024-04-20T01:35:37.945280160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 20 01:35:37 multinode-348000 dockerd[1331]: time="2024-04-20T01:35:37.945495672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 01:35:37 multinode-348000 dockerd[1331]: time="2024-04-20T01:35:37.945521573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 01:35:37 multinode-348000 dockerd[1331]: time="2024-04-20T01:35:37.945641980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 01:35:38 multinode-348000 dockerd[1331]: time="2024-04-20T01:35:38.033482932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 20 01:35:38 multinode-348000 dockerd[1331]: time="2024-04-20T01:35:38.033906043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 01:35:38 multinode-348000 dockerd[1331]: time="2024-04-20T01:35:38.034064809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 01:35:38 multinode-348000 dockerd[1331]: time="2024-04-20T01:35:38.035608684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 01:39:04 multinode-348000 dockerd[1331]: time="2024-04-20T01:39:04.453081370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 20 01:39:04 multinode-348000 dockerd[1331]: time="2024-04-20T01:39:04.453596663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 01:39:04 multinode-348000 dockerd[1331]: time="2024-04-20T01:39:04.453648063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 01:39:04 multinode-348000 dockerd[1331]: time="2024-04-20T01:39:04.453800461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 01:39:04 multinode-348000 cri-dockerd[1230]: time="2024-04-20T01:39:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/476e3efb38684054cbc21c027cf1ddd3f9ca47bb829786f8636fd877fd4b2f81/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 20 01:39:05 multinode-348000 cri-dockerd[1230]: time="2024-04-20T01:39:05Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 20 01:39:05 multinode-348000 dockerd[1331]: time="2024-04-20T01:39:05.950429472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 20 01:39:05 multinode-348000 dockerd[1331]: time="2024-04-20T01:39:05.950608372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 20 01:39:05 multinode-348000 dockerd[1331]: time="2024-04-20T01:39:05.950638272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 20 01:39:05 multinode-348000 dockerd[1331]: time="2024-04-20T01:39:05.952098176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d8afb3e1fb946       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   48 seconds ago      Running             busybox                   0                   476e3efb38684       busybox-fc5497c4f-xnz2k
	627b84abf45cd       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   2dd294415aae1       coredns-7db6d8ff4d-7w477
	e248c230a4aa3       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   da1d06ec238f4       storage-provisioner
	8a37c65d06fab       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              4 minutes ago       Running             kindnet-cni               0                   dd9e5fae3950c       kindnet-s4fsr
	a6586791413d0       a0bf559e280cf                                                                                         4 minutes ago       Running             kube-proxy                0                   7935893e9f22a       kube-proxy-kj76x
	9638ddcd54285       c7aad43836fa5                                                                                         4 minutes ago       Running             kube-controller-manager   0                   6e420625b84be       kube-controller-manager-multinode-348000
	53f6a00490766       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      0                   00d48e11227ef       etcd-multinode-348000
	490377504e57c       c42f13656d0b2                                                                                         4 minutes ago       Running             kube-apiserver            0                   187cb57784f4e       kube-apiserver-multinode-348000
	e476774b8f77e       259c8277fcbbc                                                                                         4 minutes ago       Running             kube-scheduler            0                   e5d733991bf1a       kube-scheduler-multinode-348000
	
	
	==> coredns [627b84abf45c] <==
	[INFO] 10.244.1.2:41687 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000198401s
	[INFO] 10.244.0.3:46929 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003044s
	[INFO] 10.244.0.3:35877 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000325701s
	[INFO] 10.244.0.3:53705 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000318601s
	[INFO] 10.244.0.3:40560 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164401s
	[INFO] 10.244.0.3:53239 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001239s
	[INFO] 10.244.0.3:39754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001464s
	[INFO] 10.244.0.3:41397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001668s
	[INFO] 10.244.0.3:49126 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001646s
	[INFO] 10.244.1.2:37850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115501s
	[INFO] 10.244.1.2:44063 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001443s
	[INFO] 10.244.1.2:39924 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000607s
	[INFO] 10.244.1.2:53244 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000622s
	[INFO] 10.244.0.3:52017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001879s
	[INFO] 10.244.0.3:55488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000814s
	[INFO] 10.244.0.3:57536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000778s
	[INFO] 10.244.0.3:45454 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001788s
	[INFO] 10.244.1.2:52247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001095s
	[INFO] 10.244.1.2:46954 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001143s
	[INFO] 10.244.1.2:47574 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098701s
	[INFO] 10.244.1.2:36658 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000170301s
	[INFO] 10.244.0.3:35421 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001002s
	[INFO] 10.244.0.3:41995 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132201s
	[INFO] 10.244.0.3:36431 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001956s
	[INFO] 10.244.0.3:38168 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000222s
	
	
	==> describe nodes <==
	Name:               multinode-348000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-348000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=multinode-348000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_19T18_35_09_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 01:35:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-348000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 01:39:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 01:39:13 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 01:39:13 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 01:39:13 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 01:39:13 +0000   Sat, 20 Apr 2024 01:35:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.42.231
	  Hostname:    multinode-348000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c46c605b4e0a475989ad3695e6e6f13e
	  System UUID:                fdc3fb6e-1818-9a4e-b496-b7ed0124a8e6
	  Boot ID:                    3bbdf518-f6c9-4286-907c-4562e2db8750
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xnz2k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 coredns-7db6d8ff4d-7w477                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m30s
	  kube-system                 etcd-multinode-348000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m45s
	  kube-system                 kindnet-s4fsr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m31s
	  kube-system                 kube-apiserver-multinode-348000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-controller-manager-multinode-348000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-proxy-kj76x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-scheduler-multinode-348000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m27s  kube-proxy       
	  Normal  Starting                 4m45s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m45s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m45s  kubelet          Node multinode-348000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s  kubelet          Node multinode-348000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s  kubelet          Node multinode-348000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m31s  node-controller  Node multinode-348000 event: Registered Node multinode-348000 in Controller
	  Normal  NodeReady                4m17s  kubelet          Node multinode-348000 status is now: NodeReady
	
	
	Name:               multinode-348000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-348000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=multinode-348000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T18_38_19_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 01:38:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-348000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 01:39:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 01:39:19 +0000   Sat, 20 Apr 2024 01:38:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 01:39:19 +0000   Sat, 20 Apr 2024 01:38:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 01:39:19 +0000   Sat, 20 Apr 2024 01:38:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 01:39:19 +0000   Sat, 20 Apr 2024 01:38:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.32.249
	  Hostname:    multinode-348000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 ea453a3100b34d789441206109708446
	  System UUID:                9f7972f9-8942-ef4f-b0cf-029b405f5832
	  Boot ID:                    d8ef37df-1396-47c1-8bea-04667e5bc60b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2d5hs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kindnet-s98rh              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      95s
	  kube-system                 kube-proxy-bjv9b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 84s                kube-proxy       
	  Normal  NodeHasSufficientMemory  95s (x2 over 95s)  kubelet          Node multinode-348000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s (x2 over 95s)  kubelet          Node multinode-348000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s (x2 over 95s)  kubelet          Node multinode-348000-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           91s                node-controller  Node multinode-348000-m02 event: Registered Node multinode-348000-m02 in Controller
	  Normal  NodeReady                76s                kubelet          Node multinode-348000-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.855924] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr20 01:34] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.179070] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[ +29.869631] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.104263] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.581710] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.200925] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.246636] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +2.773612] systemd-fstab-generator[1183]: Ignoring "noauto" option for root device
	[  +0.194231] systemd-fstab-generator[1195]: Ignoring "noauto" option for root device
	[  +0.193858] systemd-fstab-generator[1207]: Ignoring "noauto" option for root device
	[  +0.285693] systemd-fstab-generator[1222]: Ignoring "noauto" option for root device
	[ +11.787282] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +0.103082] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.741815] systemd-fstab-generator[1513]: Ignoring "noauto" option for root device
	[  +7.076695] systemd-fstab-generator[1716]: Ignoring "noauto" option for root device
	[  +0.104889] kauditd_printk_skb: 73 callbacks suppressed
	[Apr20 01:35] systemd-fstab-generator[2122]: Ignoring "noauto" option for root device
	[  +0.140908] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.706321] systemd-fstab-generator[2314]: Ignoring "noauto" option for root device
	[  +0.220278] kauditd_printk_skb: 12 callbacks suppressed
	[  +9.570247] kauditd_printk_skb: 51 callbacks suppressed
	[Apr20 01:39] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [53f6a0049076] <==
	{"level":"info","ts":"2024-04-20T01:35:03.240644Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:35:03.242093Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:35:03.245708Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-20T01:35:03.242196Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:35:03.242328Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-20T01:35:03.262422Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-20T01:35:03.264921Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:35:03.265161Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:35:03.265441Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:35:03.267598Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.42.231:2379"}
	2024/04/20 01:35:08 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-20T01:35:52.431139Z","caller":"traceutil/trace.go:171","msg":"trace[2084393189] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"146.874241ms","start":"2024-04-20T01:35:52.284247Z","end":"2024-04-20T01:35:52.431122Z","steps":["trace[2084393189] 'process raft request'  (duration: 146.557894ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T01:38:11.929135Z","caller":"traceutil/trace.go:171","msg":"trace[1415948215] transaction","detail":"{read_only:false; response_revision:548; number_of_response:1; }","duration":"237.164087ms","start":"2024-04-20T01:38:11.691907Z","end":"2024-04-20T01:38:11.929071Z","steps":["trace[1415948215] 'process raft request'  (duration: 237.02179ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T01:38:12.316126Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.095955ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9253928535049865333 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-348000\" mod_revision:541 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-348000\" value_size:496 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-348000\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-20T01:38:12.31642Z","caller":"traceutil/trace.go:171","msg":"trace[1765063898] transaction","detail":"{read_only:false; response_revision:549; number_of_response:1; }","duration":"360.315267ms","start":"2024-04-20T01:38:11.956088Z","end":"2024-04-20T01:38:12.316403Z","steps":["trace[1765063898] 'process raft request'  (duration: 224.39623ms)","trace[1765063898] 'compare'  (duration: 135.009957ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-20T01:38:12.316499Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:38:11.956067Z","time spent":"360.403364ms","remote":"127.0.0.1:34874","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":553,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-348000\" mod_revision:541 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-348000\" value_size:496 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-348000\" > >"}
	{"level":"warn","ts":"2024-04-20T01:38:12.87211Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"391.132686ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9253928535049865336 > lease_revoke:<id:006c8ef9247f8837>","response":"size:28"}
	{"level":"info","ts":"2024-04-20T01:38:12.872333Z","caller":"traceutil/trace.go:171","msg":"trace[2094951406] linearizableReadLoop","detail":"{readStateIndex:597; appliedIndex:596; }","duration":"375.841031ms","start":"2024-04-20T01:38:12.496478Z","end":"2024-04-20T01:38:12.872319Z","steps":["trace[2094951406] 'read index received'  (duration: 36.1µs)","trace[2094951406] 'applied index is now lower than readState.Index'  (duration: 375.803131ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-20T01:38:12.872551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"376.093424ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-20T01:38:12.872663Z","caller":"traceutil/trace.go:171","msg":"trace[1282802448] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:0; response_revision:549; }","duration":"376.651612ms","start":"2024-04-20T01:38:12.496001Z","end":"2024-04-20T01:38:12.872653Z","steps":["trace[1282802448] 'agreement among raft nodes before linearized reading'  (duration: 376.381618ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T01:38:12.872701Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:38:12.495967Z","time spent":"376.72111ms","remote":"127.0.0.1:34768","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":3,"response size":30,"request content":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true "}
	{"level":"warn","ts":"2024-04-20T01:38:28.072668Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"342.918205ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-348000-m02\" ","response":"range_response_count:1 size:2847"}
	{"level":"info","ts":"2024-04-20T01:38:28.072758Z","caller":"traceutil/trace.go:171","msg":"trace[548170216] range","detail":"{range_begin:/registry/minions/multinode-348000-m02; range_end:; response_count:1; response_revision:594; }","duration":"343.058803ms","start":"2024-04-20T01:38:27.729682Z","end":"2024-04-20T01:38:28.072741Z","steps":["trace[548170216] 'range keys from in-memory index tree'  (duration: 342.69471ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T01:38:28.072787Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:38:27.729665Z","time spent":"343.113603ms","remote":"127.0.0.1:34778","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":2870,"request content":"key:\"/registry/minions/multinode-348000-m02\" "}
	{"level":"info","ts":"2024-04-20T01:38:28.22779Z","caller":"traceutil/trace.go:171","msg":"trace[1689627356] transaction","detail":"{read_only:false; response_revision:595; number_of_response:1; }","duration":"149.97003ms","start":"2024-04-20T01:38:28.0778Z","end":"2024-04-20T01:38:28.22777Z","steps":["trace[1689627356] 'process raft request'  (duration: 149.715935ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:39:53 up 6 min,  0 users,  load average: 0.46, 0.36, 0.18
	Linux multinode-348000 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8a37c65d06fa] <==
	I0420 01:38:43.570302       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0420 01:38:53.584549       1 main.go:223] Handling node with IPs: map[172.19.42.231:{}]
	I0420 01:38:53.585071       1 main.go:227] handling current node
	I0420 01:38:53.585354       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0420 01:38:53.585423       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0420 01:39:03.597739       1 main.go:223] Handling node with IPs: map[172.19.42.231:{}]
	I0420 01:39:03.597772       1 main.go:227] handling current node
	I0420 01:39:03.597785       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0420 01:39:03.597791       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0420 01:39:13.603370       1 main.go:223] Handling node with IPs: map[172.19.42.231:{}]
	I0420 01:39:13.603492       1 main.go:227] handling current node
	I0420 01:39:13.603506       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0420 01:39:13.603515       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0420 01:39:23.615793       1 main.go:223] Handling node with IPs: map[172.19.42.231:{}]
	I0420 01:39:23.616016       1 main.go:227] handling current node
	I0420 01:39:23.616031       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0420 01:39:23.616040       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0420 01:39:33.621065       1 main.go:223] Handling node with IPs: map[172.19.42.231:{}]
	I0420 01:39:33.621177       1 main.go:227] handling current node
	I0420 01:39:33.621192       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0420 01:39:33.621200       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0420 01:39:43.631355       1 main.go:223] Handling node with IPs: map[172.19.42.231:{}]
	I0420 01:39:43.631394       1 main.go:227] handling current node
	I0420 01:39:43.631406       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0420 01:39:43.631412       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [490377504e57] <==
	I0420 01:35:07.289055       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0420 01:35:07.302035       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.42.231]
	I0420 01:35:07.303465       1 controller.go:615] quota admission added evaluator for: endpoints
	I0420 01:35:07.315588       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0420 01:35:07.971270       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0420 01:35:08.117616       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0420 01:35:08.117735       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0420 01:35:08.119526       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0420 01:35:08.119868       1 timeout.go:142] post-timeout activity - time-elapsed: 2.768332ms, POST "/api/v1/namespaces/default/events" result: <nil>
	E0420 01:35:08.121169       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 2.19902ms, panicked: false, err: context canceled, panic-reason: <nil>
	I0420 01:35:08.293906       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0420 01:35:08.343351       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0420 01:35:08.366148       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0420 01:35:22.689698       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0420 01:35:22.848467       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0420 01:39:09.629967       1 conn.go:339] Error on socket receive: read tcp 172.19.42.231:8443->172.19.32.1:52806: use of closed network connection
	E0420 01:39:10.092021       1 conn.go:339] Error on socket receive: read tcp 172.19.42.231:8443->172.19.32.1:52808: use of closed network connection
	E0420 01:39:10.599427       1 conn.go:339] Error on socket receive: read tcp 172.19.42.231:8443->172.19.32.1:52810: use of closed network connection
	E0420 01:39:11.069565       1 conn.go:339] Error on socket receive: read tcp 172.19.42.231:8443->172.19.32.1:52812: use of closed network connection
	E0420 01:39:11.521895       1 conn.go:339] Error on socket receive: read tcp 172.19.42.231:8443->172.19.32.1:52814: use of closed network connection
	E0420 01:39:11.971404       1 conn.go:339] Error on socket receive: read tcp 172.19.42.231:8443->172.19.32.1:52816: use of closed network connection
	E0420 01:39:12.788364       1 conn.go:339] Error on socket receive: read tcp 172.19.42.231:8443->172.19.32.1:52819: use of closed network connection
	E0420 01:39:23.272038       1 conn.go:339] Error on socket receive: read tcp 172.19.42.231:8443->172.19.32.1:52821: use of closed network connection
	E0420 01:39:23.708614       1 conn.go:339] Error on socket receive: read tcp 172.19.42.231:8443->172.19.32.1:52824: use of closed network connection
	E0420 01:39:34.172569       1 conn.go:339] Error on socket receive: read tcp 172.19.42.231:8443->172.19.32.1:52826: use of closed network connection
	
	
	==> kube-controller-manager [9638ddcd5428] <==
	I0420 01:35:23.169115       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0420 01:35:23.179171       1 shared_informer.go:320] Caches are synced for garbage collector
	I0420 01:35:23.263116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="374.4156ms"
	I0420 01:35:23.291471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.172623ms"
	I0420 01:35:23.291547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.106µs"
	I0420 01:35:23.578182       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.803114ms"
	I0420 01:35:23.630233       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.666311ms"
	I0420 01:35:23.630467       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="183.125µs"
	I0420 01:35:36.906373       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="291.116µs"
	I0420 01:35:36.934151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="76.104µs"
	I0420 01:35:37.573034       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0420 01:35:39.217159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.488µs"
	I0420 01:35:39.265403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.862669ms"
	I0420 01:35:39.266023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="552.786µs"
	I0420 01:38:18.575680       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m02\" does not exist"
	I0420 01:38:18.590900       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m02" podCIDRs=["10.244.1.0/24"]
	I0420 01:38:22.613051       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m02"
	I0420 01:38:37.669535       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0420 01:39:03.031296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.090021ms"
	I0420 01:39:03.053897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.363721ms"
	I0420 01:39:03.054543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.499µs"
	I0420 01:39:05.783927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.434034ms"
	I0420 01:39:05.784276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="108.901µs"
	I0420 01:39:07.103598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.163039ms"
	I0420 01:39:07.104054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.4µs"
	
	
	==> kube-proxy [a6586791413d] <==
	I0420 01:35:26.120497       1 server_linux.go:69] "Using iptables proxy"
	I0420 01:35:26.156956       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.42.231"]
	I0420 01:35:26.208282       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 01:35:26.208472       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 01:35:26.208501       1 server_linux.go:165] "Using iptables Proxier"
	I0420 01:35:26.214693       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 01:35:26.216114       1 server.go:872] "Version info" version="v1.30.0"
	I0420 01:35:26.216181       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:35:26.219192       1 config.go:192] "Starting service config controller"
	I0420 01:35:26.219810       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 01:35:26.220079       1 config.go:101] "Starting endpoint slice config controller"
	I0420 01:35:26.220093       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 01:35:26.221802       1 config.go:319] "Starting node config controller"
	I0420 01:35:26.221980       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 01:35:26.320313       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 01:35:26.320380       1 shared_informer.go:320] Caches are synced for service config
	I0420 01:35:26.322323       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e476774b8f77] <==
	W0420 01:35:06.213142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0420 01:35:06.213279       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0420 01:35:06.278808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0420 01:35:06.279232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0420 01:35:06.310265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0420 01:35:06.311126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0420 01:35:06.333128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0420 01:35:06.333531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0420 01:35:06.355993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0420 01:35:06.356053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0420 01:35:06.356154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0420 01:35:06.356365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0420 01:35:06.490128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 01:35:06.490240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 01:35:06.496247       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 01:35:06.496709       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 01:35:06.552817       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 01:35:06.552917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 01:35:06.607496       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 01:35:06.607914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 01:35:06.608255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 01:35:06.608488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0420 01:35:06.623642       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0420 01:35:06.624029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0420 01:35:09.746203       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 20 01:35:39 multinode-348000 kubelet[2129]: I0420 01:35:39.242930    2129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7w477" podStartSLOduration=16.242896844 podStartE2EDuration="16.242896844s" podCreationTimestamp="2024-04-20 01:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-20 01:35:39.215875753 +0000 UTC m=+31.069684822" watchObservedRunningTime="2024-04-20 01:35:39.242896844 +0000 UTC m=+31.096705813"
	Apr 20 01:36:08 multinode-348000 kubelet[2129]: E0420 01:36:08.427395    2129 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:36:08 multinode-348000 kubelet[2129]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:36:08 multinode-348000 kubelet[2129]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:36:08 multinode-348000 kubelet[2129]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:36:08 multinode-348000 kubelet[2129]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:37:08 multinode-348000 kubelet[2129]: E0420 01:37:08.431005    2129 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:37:08 multinode-348000 kubelet[2129]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:37:08 multinode-348000 kubelet[2129]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:37:08 multinode-348000 kubelet[2129]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:37:08 multinode-348000 kubelet[2129]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:38:08 multinode-348000 kubelet[2129]: E0420 01:38:08.427286    2129 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:38:08 multinode-348000 kubelet[2129]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:38:08 multinode-348000 kubelet[2129]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:38:08 multinode-348000 kubelet[2129]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:38:08 multinode-348000 kubelet[2129]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:39:03 multinode-348000 kubelet[2129]: I0420 01:39:03.016969    2129 topology_manager.go:215] "Topology Admit Handler" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916" podNamespace="default" podName="busybox-fc5497c4f-xnz2k"
	Apr 20 01:39:03 multinode-348000 kubelet[2129]: W0420 01:39:03.023417    2129 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-348000" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-348000' and this object
	Apr 20 01:39:03 multinode-348000 kubelet[2129]: E0420 01:39:03.023558    2129 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-348000" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-348000' and this object
	Apr 20 01:39:03 multinode-348000 kubelet[2129]: I0420 01:39:03.155040    2129 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d86jr\" (UniqueName: \"kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr\") pod \"busybox-fc5497c4f-xnz2k\" (UID: \"7aa2ff69-7aaf-48d7-905e-15ad43a94916\") " pod="default/busybox-fc5497c4f-xnz2k"
	Apr 20 01:39:08 multinode-348000 kubelet[2129]: E0420 01:39:08.429084    2129 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:39:08 multinode-348000 kubelet[2129]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:39:08 multinode-348000 kubelet[2129]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:39:08 multinode-348000 kubelet[2129]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:39:08 multinode-348000 kubelet[2129]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0419 18:39:45.770994    7044 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-348000 -n multinode-348000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-348000 -n multinode-348000: (11.5811079s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-348000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (55.17s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (521.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-348000
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-348000
E0419 18:55:44.604931    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-348000: (1m38.3299984s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-348000 --wait=true -v=8 --alsologtostderr
E0419 19:00:44.601035    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-348000 --wait=true -v=8 --alsologtostderr: exit status 1 (6m16.1792863s)

                                                
                                                
-- stdout --
	* [multinode-348000] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-348000" primary control-plane node in "multinode-348000" cluster
	* Restarting existing hyperv VM for "multinode-348000" ...
	* Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-348000-m02" worker node in "multinode-348000" cluster
	* Restarting existing hyperv VM for "multinode-348000-m02" ...
	* Found network options:
	  - NO_PROXY=172.19.42.24
	  - NO_PROXY=172.19.42.24
	* Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	  - env NO_PROXY=172.19.42.24
	* Verifying Kubernetes components...
	
	* Starting "multinode-348000-m03" worker node in "multinode-348000" cluster
	* Restarting existing hyperv VM for "multinode-348000-m03" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0419 18:55:51.911922   14960 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0419 18:55:51.913151   14960 out.go:291] Setting OutFile to fd 948 ...
	I0419 18:55:51.913922   14960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 18:55:51.913922   14960 out.go:304] Setting ErrFile to fd 868...
	I0419 18:55:51.913922   14960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 18:55:51.980851   14960 out.go:298] Setting JSON to false
	I0419 18:55:51.989167   14960 start.go:129] hostinfo: {"hostname":"minikube1","uptime":16610,"bootTime":1713561541,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0419 18:55:51.989167   14960 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 18:55:52.117827   14960 out.go:177] * [multinode-348000] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0419 18:55:52.194388   14960 notify.go:220] Checking for updates...
	I0419 18:55:52.292331   14960 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 18:55:52.465492   14960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 18:55:52.559397   14960 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0419 18:55:52.632405   14960 out.go:177]   - MINIKUBE_LOCATION=18703
	I0419 18:55:52.885380   14960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 18:55:52.993344   14960 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:55:52.993641   14960 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 18:55:58.284530   14960 out.go:177] * Using the hyperv driver based on existing profile
	I0419 18:55:58.288651   14960 start.go:297] selected driver: hyperv
	I0419 18:55:58.288651   14960 start.go:901] validating driver "hyperv" against &{Name:multinode-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 N
amespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.42.231 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.32.249 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.37.59 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:fals
e istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 18:55:58.289069   14960 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 18:55:58.342162   14960 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 18:55:58.342162   14960 cni.go:84] Creating CNI manager for ""
	I0419 18:55:58.342162   14960 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0419 18:55:58.343716   14960 start.go:340] cluster config:
	{Name:multinode-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.42.231 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.32.249 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.37.59 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logview
er:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 18:55:58.343716   14960 iso.go:125] acquiring lock: {Name:mk297f2abb67cbbcd36490c866afe693892d0c05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 18:55:58.349258   14960 out.go:177] * Starting "multinode-348000" primary control-plane node in "multinode-348000" cluster
	I0419 18:55:58.385284   14960 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 18:55:58.385835   14960 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0419 18:55:58.385835   14960 cache.go:56] Caching tarball of preloaded images
	I0419 18:55:58.386359   14960 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0419 18:55:58.386751   14960 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 18:55:58.386751   14960 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 18:55:58.389868   14960 start.go:360] acquireMachinesLock for multinode-348000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 18:55:58.389868   14960 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-348000"
	I0419 18:55:58.389868   14960 start.go:96] Skipping create...Using existing machine configuration
	I0419 18:55:58.390399   14960 fix.go:54] fixHost starting: 
	I0419 18:55:58.390558   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:01.011301   14960 main.go:141] libmachine: [stdout =====>] : Off
	
	I0419 18:56:01.011301   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:01.011424   14960 fix.go:112] recreateIfNeeded on multinode-348000: state=Stopped err=<nil>
	W0419 18:56:01.011424   14960 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 18:56:01.017995   14960 out.go:177] * Restarting existing hyperv VM for "multinode-348000" ...
	I0419 18:56:01.021435   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-348000
	I0419 18:56:03.976518   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:56:03.976695   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:03.976749   14960 main.go:141] libmachine: Waiting for host to start...
	I0419 18:56:03.976808   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:06.149898   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:06.149898   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:06.150144   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:08.600938   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:56:08.600938   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:09.609308   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:11.749167   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:11.749167   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:11.749658   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:14.261893   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:56:14.261893   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:15.265289   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:17.405348   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:17.405348   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:17.405486   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:19.898109   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:56:19.898803   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:20.904928   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:23.053093   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:23.053286   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:23.053410   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:25.550050   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:56:25.550237   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:26.564114   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:28.712224   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:28.712224   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:28.712347   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:31.265712   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:56:31.265712   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:31.269700   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:33.327571   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:33.328392   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:33.328451   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:35.852529   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:56:35.852529   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:35.852807   14960 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 18:56:35.855995   14960 machine.go:94] provisionDockerMachine start ...
	I0419 18:56:35.856119   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:37.883473   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:37.884484   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:37.884716   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:40.391568   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:56:40.392030   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:40.399762   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 18:56:40.400677   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.24 22 <nil> <nil>}
	I0419 18:56:40.400677   14960 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 18:56:40.534878   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0419 18:56:40.534878   14960 buildroot.go:166] provisioning hostname "multinode-348000"
	I0419 18:56:40.535043   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:42.572882   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:42.572882   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:42.573256   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:45.056072   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:56:45.056120   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:45.063532   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 18:56:45.063532   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.24 22 <nil> <nil>}
	I0419 18:56:45.063532   14960 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-348000 && echo "multinode-348000" | sudo tee /etc/hostname
	I0419 18:56:45.237666   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-348000
	
	I0419 18:56:45.238364   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:47.296593   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:47.296593   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:47.297059   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:49.751556   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:56:49.751965   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:49.757436   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 18:56:49.758179   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.24 22 <nil> <nil>}
	I0419 18:56:49.758179   14960 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-348000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-348000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-348000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 18:56:49.915457   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 18:56:49.915566   14960 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0419 18:56:49.915566   14960 buildroot.go:174] setting up certificates
	I0419 18:56:49.915687   14960 provision.go:84] configureAuth start
	I0419 18:56:49.915687   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:51.995337   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:51.995337   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:51.996328   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:54.489945   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:56:54.489945   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:54.491020   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:56.568951   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:56.568951   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:56.569150   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:59.080141   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:56:59.080869   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:59.080928   14960 provision.go:143] copyHostCerts
	I0419 18:56:59.080928   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0419 18:56:59.080928   14960 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0419 18:56:59.080928   14960 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0419 18:56:59.081531   14960 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0419 18:56:59.083448   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0419 18:56:59.083600   14960 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0419 18:56:59.083600   14960 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0419 18:56:59.083600   14960 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0419 18:56:59.085211   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0419 18:56:59.085459   14960 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0419 18:56:59.085459   14960 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0419 18:56:59.085459   14960 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0419 18:56:59.086717   14960 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-348000 san=[127.0.0.1 172.19.42.24 localhost minikube multinode-348000]
	I0419 18:56:59.212497   14960 provision.go:177] copyRemoteCerts
	I0419 18:56:59.227899   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 18:56:59.227899   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:01.260402   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:01.260588   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:01.260718   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:03.765930   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:03.765984   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:03.766368   14960 sshutil.go:53] new ssh client: &{IP:172.19.42.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 18:57:03.874864   14960 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6469554s)
	I0419 18:57:03.874945   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0419 18:57:03.875102   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0419 18:57:03.923262   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0419 18:57:03.923890   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0419 18:57:03.970966   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0419 18:57:03.970966   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 18:57:04.021035   14960 provision.go:87] duration metric: took 14.1053189s to configureAuth
	I0419 18:57:04.021174   14960 buildroot.go:189] setting minikube options for container-runtime
	I0419 18:57:04.021977   14960 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:57:04.022083   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:06.072836   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:06.072836   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:06.073215   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:08.599860   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:08.599860   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:08.608221   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 18:57:08.608356   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.24 22 <nil> <nil>}
	I0419 18:57:08.608974   14960 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0419 18:57:08.734094   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0419 18:57:08.734649   14960 buildroot.go:70] root file system type: tmpfs
	I0419 18:57:08.734839   14960 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0419 18:57:08.734921   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:10.819245   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:10.819245   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:10.819599   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:13.335038   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:13.335504   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:13.342105   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 18:57:13.342922   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.24 22 <nil> <nil>}
	I0419 18:57:13.342922   14960 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0419 18:57:13.516079   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0419 18:57:13.516079   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:15.577194   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:15.577312   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:15.577312   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:18.061518   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:18.061518   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:18.067921   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 18:57:18.069954   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.24 22 <nil> <nil>}
	I0419 18:57:18.069954   14960 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0419 18:57:20.654789   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0419 18:57:20.654870   14960 machine.go:97] duration metric: took 44.7987431s to provisionDockerMachine
	I0419 18:57:20.654950   14960 start.go:293] postStartSetup for "multinode-348000" (driver="hyperv")
	I0419 18:57:20.654986   14960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 18:57:20.669220   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 18:57:20.669220   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:22.756526   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:22.756526   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:22.756873   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:25.261333   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:25.261333   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:25.262619   14960 sshutil.go:53] new ssh client: &{IP:172.19.42.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 18:57:25.367494   14960 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6981981s)
	I0419 18:57:25.381544   14960 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 18:57:25.387394   14960 command_runner.go:130] > NAME=Buildroot
	I0419 18:57:25.387394   14960 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0419 18:57:25.387394   14960 command_runner.go:130] > ID=buildroot
	I0419 18:57:25.387394   14960 command_runner.go:130] > VERSION_ID=2023.02.9
	I0419 18:57:25.387394   14960 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0419 18:57:25.387394   14960 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 18:57:25.387394   14960 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0419 18:57:25.388650   14960 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0419 18:57:25.389048   14960 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> 34162.pem in /etc/ssl/certs
	I0419 18:57:25.389048   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /etc/ssl/certs/34162.pem
	I0419 18:57:25.406031   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 18:57:25.425469   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /etc/ssl/certs/34162.pem (1708 bytes)
	I0419 18:57:25.474243   14960 start.go:296] duration metric: took 4.819247s for postStartSetup
	I0419 18:57:25.474572   14960 fix.go:56] duration metric: took 1m27.0839897s for fixHost
	I0419 18:57:25.474772   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:27.537697   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:27.537697   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:27.537697   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:30.056748   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:30.056748   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:30.066919   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 18:57:30.067612   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.24 22 <nil> <nil>}
	I0419 18:57:30.067612   14960 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0419 18:57:30.198144   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713578250.184697143
	
	I0419 18:57:30.198144   14960 fix.go:216] guest clock: 1713578250.184697143
	I0419 18:57:30.198144   14960 fix.go:229] Guest: 2024-04-19 18:57:30.184697143 -0700 PDT Remote: 2024-04-19 18:57:25.4746874 -0700 PDT m=+93.668371801 (delta=4.710009743s)
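The clock-skew fix above compares the guest's `date +%s.%N` output against the host's wall clock, then resets the guest with `sudo date -s @<epoch>` (whole seconds). A minimal sketch of that delta computation, using the two timestamps from the log lines above:

```python
# Guest clock as reported by `date +%s.%N` over SSH (from the log above).
guest = 1713578250.184697143

# Host wall clock at the same moment, as epoch seconds
# (2024-04-19 18:57:25.4746874 -0700 PDT, from the same log line).
remote = 1713578245.4746874

delta = guest - remote        # skew between guest and host, ~4.71s

# The fix then runs `sudo date -s @<epoch>`, truncating to whole seconds:
epoch_to_set = int(guest)     # 1713578250
```

This matches the `delta=4.710009743s` the log reports and the `sudo date -s @1713578250` command issued a few lines later.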
	I0419 18:57:30.198144   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:32.243202   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:32.243202   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:32.243428   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:34.758113   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:34.758113   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:34.766893   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 18:57:34.767071   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.24 22 <nil> <nil>}
	I0419 18:57:34.767071   14960 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713578250
	I0419 18:57:34.908225   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: Sat Apr 20 01:57:30 UTC 2024
	
	I0419 18:57:34.908225   14960 fix.go:236] clock set: Sat Apr 20 01:57:30 UTC 2024
	 (err=<nil>)
	I0419 18:57:34.908225   14960 start.go:83] releasing machines lock for "multinode-348000", held for 1m36.5181541s
	I0419 18:57:34.908225   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:36.964392   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:36.964490   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:36.964591   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:39.472701   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:39.473145   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:39.480354   14960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 18:57:39.480354   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:39.491117   14960 ssh_runner.go:195] Run: cat /version.json
	I0419 18:57:39.491117   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:41.650254   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:41.650536   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:41.650682   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:41.684939   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:41.684939   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:41.684939   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:44.317789   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:44.318368   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:44.318626   14960 sshutil.go:53] new ssh client: &{IP:172.19.42.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 18:57:44.343621   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:44.343621   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:44.343621   14960 sshutil.go:53] new ssh client: &{IP:172.19.42.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 18:57:44.425103   14960 command_runner.go:130] > {"iso_version": "v1.33.0", "kicbase_version": "v0.0.43-1713236840-18649", "minikube_version": "v1.33.0", "commit": "4bd203f0c710e7fdd30539846cf2bc6624a2556d"}
	I0419 18:57:44.425332   14960 ssh_runner.go:235] Completed: cat /version.json: (4.9342037s)
	I0419 18:57:44.439607   14960 ssh_runner.go:195] Run: systemctl --version
	I0419 18:57:44.504695   14960 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0419 18:57:44.504695   14960 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0243304s)
	I0419 18:57:44.504942   14960 command_runner.go:130] > systemd 252 (252)
	I0419 18:57:44.505043   14960 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0419 18:57:44.517125   14960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0419 18:57:44.529313   14960 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0419 18:57:44.530005   14960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 18:57:44.546276   14960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 18:57:44.578981   14960 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0419 18:57:44.579096   14960 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 18:57:44.579096   14960 start.go:494] detecting cgroup driver to use...
	I0419 18:57:44.579205   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 18:57:44.618210   14960 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
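The `printf … | sudo tee /etc/crictl.yaml` step above amounts to writing a one-line YAML file that points crictl at the runtime socket. A sketch of the equivalent write (targeting a temp directory rather than `/etc`):

```python
import os
import tempfile

# Same payload the printf|tee pipeline produces in the log above.
content = "runtime-endpoint: unix:///run/containerd/containerd.sock\n"

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "crictl.yaml")
with open(path, "w") as f:
    f.write(content)

# Read back what was written, as crictl would.
written = open(path).read()
```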
	I0419 18:57:44.633185   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0419 18:57:44.670614   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0419 18:57:44.692361   14960 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0419 18:57:44.707651   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0419 18:57:44.740305   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 18:57:44.777779   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0419 18:57:44.812540   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 18:57:44.847553   14960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 18:57:44.884185   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0419 18:57:44.920990   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0419 18:57:44.956049   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
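Each `sed -i -r` invocation above is an anchored, whitespace-preserving substitution on /etc/containerd/config.toml. The `SystemdCgroup` edit, reproduced with Python's `re` module (an illustrative equivalent, not minikube's actual code):

```python
import re

config = '''[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
'''

# Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\\1SystemdCgroup = false|g'
# The captured leading spaces keep the TOML indentation intact.
patched = re.sub(r'^( *)SystemdCgroup = .*$',
                 r'\1SystemdCgroup = false',
                 config, flags=re.MULTILINE)
```

The other substitutions in the log (sandbox_image, restrict_oom_score_adj, conf_dir, runtime handlers) follow the same pattern: match the whole line, capture the indentation, rewrite the value.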
	I0419 18:57:44.994494   14960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 18:57:45.013362   14960 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0419 18:57:45.028271   14960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 18:57:45.060779   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:57:45.273878   14960 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0419 18:57:45.313707   14960 start.go:494] detecting cgroup driver to use...
	I0419 18:57:45.328861   14960 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0419 18:57:45.356028   14960 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0419 18:57:45.356028   14960 command_runner.go:130] > [Unit]
	I0419 18:57:45.356028   14960 command_runner.go:130] > Description=Docker Application Container Engine
	I0419 18:57:45.356028   14960 command_runner.go:130] > Documentation=https://docs.docker.com
	I0419 18:57:45.356028   14960 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0419 18:57:45.356028   14960 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0419 18:57:45.356028   14960 command_runner.go:130] > StartLimitBurst=3
	I0419 18:57:45.356028   14960 command_runner.go:130] > StartLimitIntervalSec=60
	I0419 18:57:45.356028   14960 command_runner.go:130] > [Service]
	I0419 18:57:45.356028   14960 command_runner.go:130] > Type=notify
	I0419 18:57:45.356028   14960 command_runner.go:130] > Restart=on-failure
	I0419 18:57:45.356028   14960 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0419 18:57:45.356028   14960 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0419 18:57:45.356028   14960 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0419 18:57:45.356028   14960 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0419 18:57:45.356028   14960 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0419 18:57:45.356028   14960 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0419 18:57:45.356028   14960 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0419 18:57:45.356028   14960 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0419 18:57:45.356028   14960 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0419 18:57:45.356028   14960 command_runner.go:130] > ExecStart=
	I0419 18:57:45.356028   14960 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0419 18:57:45.356028   14960 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0419 18:57:45.356028   14960 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0419 18:57:45.356568   14960 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0419 18:57:45.356568   14960 command_runner.go:130] > LimitNOFILE=infinity
	I0419 18:57:45.356568   14960 command_runner.go:130] > LimitNPROC=infinity
	I0419 18:57:45.356617   14960 command_runner.go:130] > LimitCORE=infinity
	I0419 18:57:45.356617   14960 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0419 18:57:45.356617   14960 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0419 18:57:45.356617   14960 command_runner.go:130] > TasksMax=infinity
	I0419 18:57:45.356676   14960 command_runner.go:130] > TimeoutStartSec=0
	I0419 18:57:45.356676   14960 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0419 18:57:45.356718   14960 command_runner.go:130] > Delegate=yes
	I0419 18:57:45.356718   14960 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0419 18:57:45.356718   14960 command_runner.go:130] > KillMode=process
	I0419 18:57:45.356718   14960 command_runner.go:130] > [Install]
	I0419 18:57:45.356770   14960 command_runner.go:130] > WantedBy=multi-user.target
	I0419 18:57:45.370652   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 18:57:45.407895   14960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 18:57:45.461873   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 18:57:45.501637   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 18:57:45.544235   14960 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0419 18:57:45.617094   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 18:57:45.647270   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 18:57:45.681764   14960 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0419 18:57:45.696683   14960 ssh_runner.go:195] Run: which cri-dockerd
	I0419 18:57:45.702638   14960 command_runner.go:130] > /usr/bin/cri-dockerd
	I0419 18:57:45.717383   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0419 18:57:45.736623   14960 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0419 18:57:45.783753   14960 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0419 18:57:45.987748   14960 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0419 18:57:46.186538   14960 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0419 18:57:46.186538   14960 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
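The 130-byte `/etc/docker/daemon.json` payload is not echoed in the log; a plausible shape for a cgroupfs-driver config is sketched below. The exact keys here are an assumption for illustration, not taken from the log:

```python
import json

# Hypothetical daemon.json contents -- the real 130-byte payload is not
# shown in the log; only "configuring docker to use cgroupfs" is logged.
daemon = {
    "exec-opts": ["native.cgroupdriver=cgroupfs"],
    "log-driver": "json-file",
    "log-opts": {"max-size": "100m"},
}

payload = json.dumps(daemon, indent=2)
roundtrip = json.loads(payload)
```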
	I0419 18:57:46.235226   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:57:46.452721   14960 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 18:57:49.103384   14960 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6506574s)
	I0419 18:57:49.117767   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0419 18:57:49.156025   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 18:57:49.193133   14960 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0419 18:57:49.391207   14960 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0419 18:57:49.601806   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:57:49.835578   14960 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0419 18:57:49.887214   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 18:57:49.925625   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:57:50.145208   14960 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0419 18:57:50.254781   14960 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0419 18:57:50.267794   14960 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0419 18:57:50.277781   14960 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0419 18:57:50.277781   14960 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0419 18:57:50.277781   14960 command_runner.go:130] > Device: 0,22	Inode: 852         Links: 1
	I0419 18:57:50.277781   14960 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0419 18:57:50.277781   14960 command_runner.go:130] > Access: 2024-04-20 01:57:50.164058530 +0000
	I0419 18:57:50.277781   14960 command_runner.go:130] > Modify: 2024-04-20 01:57:50.164058530 +0000
	I0419 18:57:50.277781   14960 command_runner.go:130] > Change: 2024-04-20 01:57:50.168058647 +0000
	I0419 18:57:50.277781   14960 command_runner.go:130] >  Birth: -
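"Will wait 60s for socket path" above is a poll-until-exists loop around `stat /var/run/cri-dockerd.sock`. A sketch of that pattern (illustrative, not minikube's implementation):

```python
import os
import tempfile
import time

def wait_for_path(path, timeout=60.0, interval=0.1):
    """Poll until `path` exists or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False

# Usage: an existing file is found immediately; a missing one times out.
existing = tempfile.NamedTemporaryFile(delete=False).name
found = wait_for_path(existing, timeout=1.0)
missing = wait_for_path(existing + ".does-not-exist", timeout=0.3)
```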
	I0419 18:57:50.277781   14960 start.go:562] Will wait 60s for crictl version
	I0419 18:57:50.293143   14960 ssh_runner.go:195] Run: which crictl
	I0419 18:57:50.299154   14960 command_runner.go:130] > /usr/bin/crictl
	I0419 18:57:50.317417   14960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 18:57:50.381375   14960 command_runner.go:130] > Version:  0.1.0
	I0419 18:57:50.381375   14960 command_runner.go:130] > RuntimeName:  docker
	I0419 18:57:50.381375   14960 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0419 18:57:50.381375   14960 command_runner.go:130] > RuntimeApiVersion:  v1
	I0419 18:57:50.381375   14960 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0419 18:57:50.391146   14960 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 18:57:50.422989   14960 command_runner.go:130] > 26.0.1
	I0419 18:57:50.433014   14960 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 18:57:50.463601   14960 command_runner.go:130] > 26.0.1
	I0419 18:57:50.468601   14960 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0419 18:57:50.468601   14960 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0419 18:57:50.470600   14960 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0419 18:57:50.470600   14960 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0419 18:57:50.470600   14960 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0419 18:57:50.470600   14960 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8c:b9:25 Flags:up|broadcast|multicast|running}
	I0419 18:57:50.478120   14960 ip.go:210] interface addr: fe80::ce04:318e:a1d8:4460/64
	I0419 18:57:50.478120   14960 ip.go:210] interface addr: 172.19.32.1/20
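The `getIPForInterface` lines above show a prefix match over host adapters: each interface name is tested against the wanted prefix until one matches. An illustrative version of that search:

```python
def find_interface(names, prefix):
    """Return the first interface whose name starts with `prefix`, else None."""
    for name in names:
        if name.startswith(prefix):
            return name
    return None

# Interface names as they appear in the log above.
names = ["Ethernet 2", "Loopback Pseudo-Interface 1", "vEthernet (Default Switch)"]
match = find_interface(names, "vEthernet (Default Switch)")
```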
	I0419 18:57:50.492559   14960 ssh_runner.go:195] Run: grep 172.19.32.1	host.minikube.internal$ /etc/hosts
	I0419 18:57:50.499203   14960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.32.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
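The bash one-liner above rewrites `/etc/hosts` idempotently: drop any existing `host.minikube.internal` line, then append the fresh mapping. The same logic in Python (hostname and IP taken from the log):

```python
def set_hosts_entry(hosts_text, ip, name):
    """Remove any existing entry for `name`, then append `ip<TAB>name`."""
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

hosts = "127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n"
updated = set_hosts_entry(hosts, "172.19.32.1", "host.minikube.internal")
```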
	I0419 18:57:50.521465   14960 kubeadm.go:877] updating cluster {Name:multinode-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.42.24 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.32.249 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.37.59 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 18:57:50.521857   14960 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 18:57:50.531639   14960 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0419 18:57:50.555575   14960 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0419 18:57:50.555575   14960 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0419 18:57:50.555575   14960 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 18:57:50.555575   14960 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0419 18:57:50.555575   14960 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0419 18:57:50.555575   14960 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0419 18:57:50.555575   14960 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0419 18:57:50.555575   14960 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0419 18:57:50.555575   14960 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 18:57:50.555575   14960 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0419 18:57:50.556622   14960 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0419 18:57:50.556622   14960 docker.go:615] Images already preloaded, skipping extraction
	I0419 18:57:50.565566   14960 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0419 18:57:50.588348   14960 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0419 18:57:50.588348   14960 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 18:57:50.588348   14960 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0419 18:57:50.588348   14960 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0419 18:57:50.588348   14960 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0419 18:57:50.588348   14960 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0419 18:57:50.588348   14960 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0419 18:57:50.588348   14960 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0419 18:57:50.588348   14960 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 18:57:50.588348   14960 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0419 18:57:50.589571   14960 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0419 18:57:50.589571   14960 cache_images.go:84] Images are preloaded, skipping loading
	I0419 18:57:50.589571   14960 kubeadm.go:928] updating node { 172.19.42.24 8443 v1.30.0 docker true true} ...
	I0419 18:57:50.589571   14960 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-348000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.42.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 18:57:50.598565   14960 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0419 18:57:50.635570   14960 command_runner.go:130] > cgroupfs
	I0419 18:57:50.635839   14960 cni.go:84] Creating CNI manager for ""
	I0419 18:57:50.635891   14960 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0419 18:57:50.635891   14960 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 18:57:50.635976   14960 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.42.24 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-348000 NodeName:multinode-348000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.42.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.42.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 18:57:50.636139   14960 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.42.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-348000"
	  kubeletExtraArgs:
	    node-ip: 172.19.42.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.42.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0419 18:57:50.648288   14960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 18:57:50.668178   14960 command_runner.go:130] > kubeadm
	I0419 18:57:50.668178   14960 command_runner.go:130] > kubectl
	I0419 18:57:50.668178   14960 command_runner.go:130] > kubelet
	I0419 18:57:50.668178   14960 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 18:57:50.680597   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0419 18:57:50.704763   14960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0419 18:57:50.734984   14960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 18:57:50.763652   14960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0419 18:57:50.818971   14960 ssh_runner.go:195] Run: grep 172.19.42.24	control-plane.minikube.internal$ /etc/hosts
	I0419 18:57:50.826259   14960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.42.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
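The `/etc/hosts` update just above uses a filter-and-append pattern: strip any stale `control-plane.minikube.internal` entry, append the current IP, and copy the rewritten temp file back over the original. A minimal sketch of the same pattern against a scratch file (the paths and IPs here are illustrative, not the VM's real `/etc/hosts`):

```shell
set -e
hosts=$(mktemp)
# Seed the scratch file with a stale control-plane entry (old IP).
printf '127.0.0.1\tlocalhost\n172.19.42.231\tcontrol-plane.minikube.internal\n' > "$hosts"
ip="172.19.42.24"
# Drop any existing line for the name, then re-add it with the current IP;
# writing to a temp file and copying back avoids a partially written hosts file.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '%s\tcontrol-plane.minikube.internal\n' "$ip"; } > "$hosts.new"
cp "$hosts.new" "$hosts"
grep control-plane "$hosts"
```

The `grep -v … ; echo …` grouping is why the logged command runs under `/bin/bash -c` rather than as a plain exec.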
	I0419 18:57:50.863179   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:57:51.072135   14960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 18:57:51.104396   14960 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000 for IP: 172.19.42.24
	I0419 18:57:51.104396   14960 certs.go:194] generating shared ca certs ...
	I0419 18:57:51.104396   14960 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:57:51.105376   14960 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0419 18:57:51.105730   14960 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0419 18:57:51.105855   14960 certs.go:256] generating profile certs ...
	I0419 18:57:51.106832   14960 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\client.key
	I0419 18:57:51.107062   14960 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key.ea55f2d0
	I0419 18:57:51.107237   14960 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt.ea55f2d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.42.24]
	I0419 18:57:51.254334   14960 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt.ea55f2d0 ...
	I0419 18:57:51.254334   14960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt.ea55f2d0: {Name:mk1834bcf316826ce45dc2ecf9fee6874a5df74d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:57:51.255870   14960 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key.ea55f2d0 ...
	I0419 18:57:51.255870   14960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key.ea55f2d0: {Name:mkf1eabdf644d4b38289b725707f4624e6455a39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:57:51.256924   14960 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt.ea55f2d0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt
	I0419 18:57:51.269731   14960 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key.ea55f2d0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key
	I0419 18:57:51.271801   14960 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.key
	I0419 18:57:51.271801   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 18:57:51.271801   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0419 18:57:51.271801   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 18:57:51.272469   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 18:57:51.272469   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 18:57:51.272469   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 18:57:51.273093   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 18:57:51.273093   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 18:57:51.274149   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem (1338 bytes)
	W0419 18:57:51.274667   14960 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416_empty.pem, impossibly tiny 0 bytes
	I0419 18:57:51.274777   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0419 18:57:51.275143   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0419 18:57:51.275411   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0419 18:57:51.275729   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0419 18:57:51.276217   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem (1708 bytes)
	I0419 18:57:51.276518   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem -> /usr/share/ca-certificates/3416.pem
	I0419 18:57:51.276613   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /usr/share/ca-certificates/34162.pem
	I0419 18:57:51.276796   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:57:51.278155   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 18:57:51.336844   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 18:57:51.394440   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 18:57:51.447866   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 18:57:51.503720   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0419 18:57:51.554962   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0419 18:57:51.612448   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 18:57:51.662850   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0419 18:57:51.712338   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem --> /usr/share/ca-certificates/3416.pem (1338 bytes)
	I0419 18:57:51.758478   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /usr/share/ca-certificates/34162.pem (1708 bytes)
	I0419 18:57:51.803754   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 18:57:51.849453   14960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0419 18:57:51.897500   14960 ssh_runner.go:195] Run: openssl version
	I0419 18:57:51.905991   14960 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0419 18:57:51.923131   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34162.pem && ln -fs /usr/share/ca-certificates/34162.pem /etc/ssl/certs/34162.pem"
	I0419 18:57:51.963074   14960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34162.pem
	I0419 18:57:51.970371   14960 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 18:57:51.970510   14960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 18:57:51.983605   14960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34162.pem
	I0419 18:57:51.992670   14960 command_runner.go:130] > 3ec20f2e
	I0419 18:57:52.007291   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34162.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 18:57:52.049132   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 18:57:52.088494   14960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:57:52.098338   14960 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:57:52.098338   14960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:57:52.112147   14960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:57:52.125104   14960 command_runner.go:130] > b5213941
	I0419 18:57:52.136377   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 18:57:52.174791   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3416.pem && ln -fs /usr/share/ca-certificates/3416.pem /etc/ssl/certs/3416.pem"
	I0419 18:57:52.207601   14960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3416.pem
	I0419 18:57:52.216690   14960 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 18:57:52.217293   14960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 18:57:52.231705   14960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3416.pem
	I0419 18:57:52.241774   14960 command_runner.go:130] > 51391683
	I0419 18:57:52.257361   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3416.pem /etc/ssl/certs/51391683.0"
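The `openssl x509 -hash` / `ln -fs …/<hash>.0` sequence above follows OpenSSL's subject-name-hash lookup convention: trusted certs in `/etc/ssl/certs` are found via a symlink named after the hash of the cert's subject. A small sketch of the same convention in a scratch directory, using a throwaway self-signed CA (the CN is made up for illustration):

```shell
set -e
dir=$(mktemp -d)
# Generate a throwaway self-signed cert to stand in for a CA file.
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$dir/ca.key" \
  -out "$dir/ca.pem" -days 2 -subj "/CN=exampleCA" 2>/dev/null
# OpenSSL locates trusted certs by subject-name hash, so the trust dir
# needs a "<hash>.0" symlink pointing at the actual PEM file.
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```

The `test -L … || ln -fs …` form in the log makes the step idempotent across repeated starts.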
	I0419 18:57:52.292553   14960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 18:57:52.301612   14960 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 18:57:52.301684   14960 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0419 18:57:52.301684   14960 command_runner.go:130] > Device: 8,1	Inode: 6290258     Links: 1
	I0419 18:57:52.301720   14960 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0419 18:57:52.301720   14960 command_runner.go:130] > Access: 2024-04-20 01:34:55.187593889 +0000
	I0419 18:57:52.301720   14960 command_runner.go:130] > Modify: 2024-04-20 01:34:55.187593889 +0000
	I0419 18:57:52.301720   14960 command_runner.go:130] > Change: 2024-04-20 01:34:55.187593889 +0000
	I0419 18:57:52.301720   14960 command_runner.go:130] >  Birth: 2024-04-20 01:34:55.187593889 +0000
	I0419 18:57:52.320813   14960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0419 18:57:52.330176   14960 command_runner.go:130] > Certificate will not expire
	I0419 18:57:52.343950   14960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0419 18:57:52.354776   14960 command_runner.go:130] > Certificate will not expire
	I0419 18:57:52.366524   14960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0419 18:57:52.377492   14960 command_runner.go:130] > Certificate will not expire
	I0419 18:57:52.389434   14960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0419 18:57:52.399840   14960 command_runner.go:130] > Certificate will not expire
	I0419 18:57:52.413630   14960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0419 18:57:52.423042   14960 command_runner.go:130] > Certificate will not expire
	I0419 18:57:52.436501   14960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0419 18:57:52.449531   14960 command_runner.go:130] > Certificate will not expire
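The repeated `-checkend 86400` probes above ask OpenSSL whether each cert is still valid 86400 seconds (24 hours) from now; exit status 0 prints "Certificate will not expire", exit status 1 prints "Certificate will expire" and would trigger regeneration. A minimal demonstration with a throwaway cert valid for two days:

```shell
set -e
dir=$(mktemp -d)
# Self-signed cert valid for 2 days, purely for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$dir/k.pem" \
  -out "$dir/c.pem" -days 2 -subj "/CN=demo" 2>/dev/null
# -checkend N succeeds iff the cert is still valid N seconds from now.
openssl x509 -noout -in "$dir/c.pem" -checkend 86400    # 24h window: still valid
! openssl x509 -noout -in "$dir/c.pem" -checkend 259200 # 72h window: past expiry
```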
	I0419 18:57:52.450073   14960 kubeadm.go:391] StartCluster: {Name:multinode-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.42.24 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.32.249 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.37.59 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisi
oner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 18:57:52.459380   14960 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0419 18:57:52.497238   14960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0419 18:57:52.518900   14960 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0419 18:57:52.519063   14960 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0419 18:57:52.519063   14960 command_runner.go:130] > /var/lib/minikube/etcd:
	I0419 18:57:52.519063   14960 command_runner.go:130] > member
	W0419 18:57:52.519063   14960 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0419 18:57:52.519063   14960 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0419 18:57:52.519188   14960 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0419 18:57:52.532583   14960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0419 18:57:52.549427   14960 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0419 18:57:52.551533   14960 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-348000" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 18:57:52.552010   14960 kubeconfig.go:62] C:\Users\jenkins.minikube1\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-348000" cluster setting kubeconfig missing "multinode-348000" context setting]
	I0419 18:57:52.552747   14960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:57:52.573432   14960 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 18:57:52.574110   14960 kapi.go:59] client config for multinode-348000: &rest.Config{Host:"https://172.19.42.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:
[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c35620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 18:57:52.575842   14960 cert_rotation.go:137] Starting client certificate rotation controller
	I0419 18:57:52.588431   14960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0419 18:57:52.608371   14960 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0419 18:57:52.608835   14960 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0419 18:57:52.608869   14960 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0419 18:57:52.608869   14960 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0419 18:57:52.608869   14960 command_runner.go:130] >  kind: InitConfiguration
	I0419 18:57:52.608869   14960 command_runner.go:130] >  localAPIEndpoint:
	I0419 18:57:52.608869   14960 command_runner.go:130] > -  advertiseAddress: 172.19.42.231
	I0419 18:57:52.608869   14960 command_runner.go:130] > +  advertiseAddress: 172.19.42.24
	I0419 18:57:52.608869   14960 command_runner.go:130] >    bindPort: 8443
	I0419 18:57:52.608869   14960 command_runner.go:130] >  bootstrapTokens:
	I0419 18:57:52.608869   14960 command_runner.go:130] >    - groups:
	I0419 18:57:52.608869   14960 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0419 18:57:52.608869   14960 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0419 18:57:52.608869   14960 command_runner.go:130] >    name: "multinode-348000"
	I0419 18:57:52.608869   14960 command_runner.go:130] >    kubeletExtraArgs:
	I0419 18:57:52.608988   14960 command_runner.go:130] > -    node-ip: 172.19.42.231
	I0419 18:57:52.608988   14960 command_runner.go:130] > +    node-ip: 172.19.42.24
	I0419 18:57:52.608988   14960 command_runner.go:130] >    taints: []
	I0419 18:57:52.608988   14960 command_runner.go:130] >  ---
	I0419 18:57:52.608988   14960 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0419 18:57:52.609030   14960 command_runner.go:130] >  kind: ClusterConfiguration
	I0419 18:57:52.609030   14960 command_runner.go:130] >  apiServer:
	I0419 18:57:52.609030   14960 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.19.42.231"]
	I0419 18:57:52.609058   14960 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.19.42.24"]
	I0419 18:57:52.609058   14960 command_runner.go:130] >    extraArgs:
	I0419 18:57:52.609058   14960 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0419 18:57:52.609058   14960 command_runner.go:130] >  controllerManager:
	I0419 18:57:52.609058   14960 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.19.42.231
	+  advertiseAddress: 172.19.42.24
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-348000"
	   kubeletExtraArgs:
	-    node-ip: 172.19.42.231
	+    node-ip: 172.19.42.24
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.19.42.231"]
	+  certSANs: ["127.0.0.1", "localhost", "172.19.42.24"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
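The drift check above is just `diff -u` between the deployed `kubeadm.yaml` and the freshly rendered `kubeadm.yaml.new`: a non-zero diff exit status means the advertise address (here, the VM's new DHCP lease) changed, so the cluster is reconfigured from the new file. A sketch of that detect-and-reconcile step on scratch files:

```shell
set -e
dir=$(mktemp -d)
printf 'advertiseAddress: 172.19.42.231\n' > "$dir/kubeadm.yaml"
printf 'advertiseAddress: 172.19.42.24\n'  > "$dir/kubeadm.yaml.new"
# diff exits 1 when the files differ; treat that as "config drift"
# and promote the .new file, mirroring the cp step in the log below.
if ! diff -u "$dir/kubeadm.yaml" "$dir/kubeadm.yaml.new" > "$dir/drift.txt"; then
  cp "$dir/kubeadm.yaml.new" "$dir/kubeadm.yaml"
fi
cat "$dir/drift.txt"
```

When the two files already match, `diff` exits 0, `drift.txt` is empty, and the restart path skips the reconfigure.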
	I0419 18:57:52.609058   14960 kubeadm.go:1154] stopping kube-system containers ...
	I0419 18:57:52.620488   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0419 18:57:52.648237   14960 command_runner.go:130] > 627b84abf45c
	I0419 18:57:52.648237   14960 command_runner.go:130] > e248c230a4aa
	I0419 18:57:52.648237   14960 command_runner.go:130] > da1d06ec238f
	I0419 18:57:52.648724   14960 command_runner.go:130] > 2dd294415aae
	I0419 18:57:52.648724   14960 command_runner.go:130] > 8a37c65d06fa
	I0419 18:57:52.648724   14960 command_runner.go:130] > a6586791413d
	I0419 18:57:52.648724   14960 command_runner.go:130] > 7935893e9f22
	I0419 18:57:52.648794   14960 command_runner.go:130] > dd9e5fae3950
	I0419 18:57:52.648794   14960 command_runner.go:130] > 9638ddcd5428
	I0419 18:57:52.648794   14960 command_runner.go:130] > 53f6a0049076
	I0419 18:57:52.648898   14960 command_runner.go:130] > 490377504e57
	I0419 18:57:52.648898   14960 command_runner.go:130] > e476774b8f77
	I0419 18:57:52.648898   14960 command_runner.go:130] > 187cb57784f4
	I0419 18:57:52.649016   14960 command_runner.go:130] > 00d48e11227e
	I0419 18:57:52.649016   14960 command_runner.go:130] > 6e420625b84b
	I0419 18:57:52.649081   14960 command_runner.go:130] > e5d733991bf1
	I0419 18:57:52.649915   14960 docker.go:483] Stopping containers: [627b84abf45c e248c230a4aa da1d06ec238f 2dd294415aae 8a37c65d06fa a6586791413d 7935893e9f22 dd9e5fae3950 9638ddcd5428 53f6a0049076 490377504e57 e476774b8f77 187cb57784f4 00d48e11227e 6e420625b84b e5d733991bf1]
	I0419 18:57:52.661411   14960 ssh_runner.go:195] Run: docker stop 627b84abf45c e248c230a4aa da1d06ec238f 2dd294415aae 8a37c65d06fa a6586791413d 7935893e9f22 dd9e5fae3950 9638ddcd5428 53f6a0049076 490377504e57 e476774b8f77 187cb57784f4 00d48e11227e 6e420625b84b e5d733991bf1
	I0419 18:57:52.690386   14960 command_runner.go:130] > 627b84abf45c
	I0419 18:57:52.690386   14960 command_runner.go:130] > e248c230a4aa
	I0419 18:57:52.690531   14960 command_runner.go:130] > da1d06ec238f
	I0419 18:57:52.690531   14960 command_runner.go:130] > 2dd294415aae
	I0419 18:57:52.690531   14960 command_runner.go:130] > 8a37c65d06fa
	I0419 18:57:52.690531   14960 command_runner.go:130] > a6586791413d
	I0419 18:57:52.690531   14960 command_runner.go:130] > 7935893e9f22
	I0419 18:57:52.690531   14960 command_runner.go:130] > dd9e5fae3950
	I0419 18:57:52.690531   14960 command_runner.go:130] > 9638ddcd5428
	I0419 18:57:52.690531   14960 command_runner.go:130] > 53f6a0049076
	I0419 18:57:52.690531   14960 command_runner.go:130] > 490377504e57
	I0419 18:57:52.690531   14960 command_runner.go:130] > e476774b8f77
	I0419 18:57:52.690531   14960 command_runner.go:130] > 187cb57784f4
	I0419 18:57:52.690531   14960 command_runner.go:130] > 00d48e11227e
	I0419 18:57:52.690682   14960 command_runner.go:130] > 6e420625b84b
	I0419 18:57:52.690682   14960 command_runner.go:130] > e5d733991bf1
	I0419 18:57:52.704529   14960 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0419 18:57:52.744496   14960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 18:57:52.761994   14960 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0419 18:57:52.762569   14960 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0419 18:57:52.762610   14960 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0419 18:57:52.762610   14960 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 18:57:52.762610   14960 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 18:57:52.762610   14960 kubeadm.go:156] found existing configuration files:
	
	I0419 18:57:52.774154   14960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0419 18:57:52.795097   14960 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 18:57:52.795582   14960 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 18:57:52.809117   14960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 18:57:52.839883   14960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0419 18:57:52.857275   14960 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 18:57:52.857642   14960 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 18:57:52.870050   14960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 18:57:52.900981   14960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0419 18:57:52.918355   14960 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 18:57:52.918486   14960 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 18:57:52.935043   14960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 18:57:52.966152   14960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0419 18:57:52.983924   14960 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 18:57:52.984883   14960 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 18:57:52.999206   14960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0419 18:57:53.033097   14960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 18:57:53.057364   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 18:57:53.382718   14960 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0419 18:57:53.382790   14960 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0419 18:57:53.382790   14960 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0419 18:57:53.382848   14960 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0419 18:57:53.382848   14960 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0419 18:57:53.382848   14960 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0419 18:57:53.382885   14960 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0419 18:57:53.382885   14960 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0419 18:57:53.382885   14960 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0419 18:57:53.382885   14960 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0419 18:57:53.382885   14960 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0419 18:57:53.382885   14960 command_runner.go:130] > [certs] Using the existing "sa" key
	I0419 18:57:53.382962   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 18:57:54.536252   14960 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0419 18:57:54.536252   14960 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0419 18:57:54.536252   14960 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0419 18:57:54.536252   14960 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0419 18:57:54.536252   14960 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0419 18:57:54.536252   14960 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0419 18:57:54.536252   14960 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1532878s)
	I0419 18:57:54.536252   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0419 18:57:54.847668   14960 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 18:57:54.847668   14960 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 18:57:54.847668   14960 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0419 18:57:54.847668   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 18:57:54.957881   14960 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0419 18:57:54.957881   14960 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0419 18:57:54.957881   14960 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0419 18:57:54.957881   14960 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0419 18:57:54.957881   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0419 18:57:55.071564   14960 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0419 18:57:55.071719   14960 api_server.go:52] waiting for apiserver process to appear ...
	I0419 18:57:55.089546   14960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 18:57:55.593708   14960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 18:57:56.094223   14960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 18:57:56.596301   14960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 18:57:57.088270   14960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 18:57:57.114657   14960 command_runner.go:130] > 1877
	I0419 18:57:57.114657   14960 api_server.go:72] duration metric: took 2.0430155s to wait for apiserver process to appear ...
	I0419 18:57:57.114657   14960 api_server.go:88] waiting for apiserver healthz status ...
	I0419 18:57:57.114657   14960 api_server.go:253] Checking apiserver healthz at https://172.19.42.24:8443/healthz ...
	I0419 18:58:00.658967   14960 api_server.go:279] https://172.19.42.24:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0419 18:58:00.659264   14960 api_server.go:103] status: https://172.19.42.24:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0419 18:58:00.659264   14960 api_server.go:253] Checking apiserver healthz at https://172.19.42.24:8443/healthz ...
	I0419 18:58:00.752443   14960 api_server.go:279] https://172.19.42.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0419 18:58:00.753143   14960 api_server.go:103] status: https://172.19.42.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0419 18:58:01.128754   14960 api_server.go:253] Checking apiserver healthz at https://172.19.42.24:8443/healthz ...
	I0419 18:58:01.137618   14960 api_server.go:279] https://172.19.42.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0419 18:58:01.137618   14960 api_server.go:103] status: https://172.19.42.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0419 18:58:01.616585   14960 api_server.go:253] Checking apiserver healthz at https://172.19.42.24:8443/healthz ...
	I0419 18:58:01.629910   14960 api_server.go:279] https://172.19.42.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0419 18:58:01.629910   14960 api_server.go:103] status: https://172.19.42.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0419 18:58:02.122150   14960 api_server.go:253] Checking apiserver healthz at https://172.19.42.24:8443/healthz ...
	I0419 18:58:02.128537   14960 api_server.go:279] https://172.19.42.24:8443/healthz returned 200:
	ok
	I0419 18:58:02.129819   14960 round_trippers.go:463] GET https://172.19.42.24:8443/version
	I0419 18:58:02.129819   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:02.129907   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:02.129907   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:02.143374   14960 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0419 18:58:02.143374   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:02.143374   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:02.143374   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:02.143374   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:02.143374   14960 round_trippers.go:580]     Content-Length: 263
	I0419 18:58:02.143374   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:02 GMT
	I0419 18:58:02.143374   14960 round_trippers.go:580]     Audit-Id: 3f375a0a-26a4-44b4-aeca-761f67cd0ec1
	I0419 18:58:02.143374   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:02.143374   14960 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0419 18:58:02.143374   14960 api_server.go:141] control plane version: v1.30.0
	I0419 18:58:02.143374   14960 api_server.go:131] duration metric: took 5.0287063s to wait for apiserver health ...
	I0419 18:58:02.143374   14960 cni.go:84] Creating CNI manager for ""
	I0419 18:58:02.143374   14960 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0419 18:58:02.147369   14960 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0419 18:58:02.164373   14960 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0419 18:58:02.173364   14960 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0419 18:58:02.173432   14960 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0419 18:58:02.173432   14960 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0419 18:58:02.173432   14960 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0419 18:58:02.173493   14960 command_runner.go:130] > Access: 2024-04-20 01:56:28.980814400 +0000
	I0419 18:58:02.173493   14960 command_runner.go:130] > Modify: 2024-04-18 23:25:47.000000000 +0000
	I0419 18:58:02.173526   14960 command_runner.go:130] > Change: 2024-04-20 01:56:17.849000000 +0000
	I0419 18:58:02.173526   14960 command_runner.go:130] >  Birth: -
	I0419 18:58:02.173646   14960 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0419 18:58:02.173683   14960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0419 18:58:02.274816   14960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0419 18:58:03.445994   14960 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0419 18:58:03.446103   14960 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0419 18:58:03.446103   14960 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0419 18:58:03.446163   14960 command_runner.go:130] > daemonset.apps/kindnet configured
	I0419 18:58:03.446163   14960 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.1713448s)
	I0419 18:58:03.446243   14960 system_pods.go:43] waiting for kube-system pods to appear ...
	I0419 18:58:03.446490   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods
	I0419 18:58:03.446490   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.446490   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.446490   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.453080   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:03.453080   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.453080   14960 round_trippers.go:580]     Audit-Id: b5b53c7d-498a-46b7-9bac-9dd8e14fb35a
	I0419 18:58:03.453080   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.453080   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.453080   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.453080   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.453080   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.454063   14960 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1748"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87662 chars]
	I0419 18:58:03.461090   14960 system_pods.go:59] 12 kube-system pods found
	I0419 18:58:03.461090   14960 system_pods.go:61] "coredns-7db6d8ff4d-7w477" [895ddde9-466d-4abf-b6f4-594847b26c6c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0419 18:58:03.461090   14960 system_pods.go:61] "etcd-multinode-348000" [33702588-cdf3-4577-b18d-18415cca2c25] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0419 18:58:03.461090   14960 system_pods.go:61] "kindnet-mg8qs" [c6e448a2-6f0c-4c7f-aa8b-0d585c84b09e] Running
	I0419 18:58:03.461090   14960 system_pods.go:61] "kindnet-s4fsr" [46c91d5e-edfa-4254-a802-148047caeab5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0419 18:58:03.461090   14960 system_pods.go:61] "kindnet-s98rh" [551f5bde-7c56-4023-ad92-a2d7a122da60] Running
	I0419 18:58:03.461090   14960 system_pods.go:61] "kube-apiserver-multinode-348000" [13adbf1b-6c17-47a9-951d-2481680a47bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0419 18:58:03.461090   14960 system_pods.go:61] "kube-controller-manager-multinode-348000" [299bb088-9795-4452-87a8-5e96bcacedde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0419 18:58:03.461090   14960 system_pods.go:61] "kube-proxy-2jjsq" [f9666ab7-0d1f-4800-b979-6e38fecdc518] Running
	I0419 18:58:03.461090   14960 system_pods.go:61] "kube-proxy-bjv9b" [3e909d14-543a-4734-8c17-7e2b8188553d] Running
	I0419 18:58:03.461090   14960 system_pods.go:61] "kube-proxy-kj76x" [274342c4-c21f-4279-b0ea-743d8e2c1463] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0419 18:58:03.461090   14960 system_pods.go:61] "kube-scheduler-multinode-348000" [000cfafe-a513-4738-9de2-3c25244b72be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0419 18:58:03.461090   14960 system_pods.go:61] "storage-provisioner" [ffa0cfb9-91fb-4d5b-abe7-11992c731b74] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0419 18:58:03.461090   14960 system_pods.go:74] duration metric: took 14.7858ms to wait for pod list to return data ...
	I0419 18:58:03.461090   14960 node_conditions.go:102] verifying NodePressure condition ...
	I0419 18:58:03.461090   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes
	I0419 18:58:03.461090   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.461090   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.461090   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.465072   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:03.466031   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.466031   14960 round_trippers.go:580]     Audit-Id: b637c23b-59da-459f-8966-62b69ec7f601
	I0419 18:58:03.466082   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.466082   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.466082   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.466082   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.466082   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.466150   14960 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1748"},"items":[{"metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15626 chars]
	I0419 18:58:03.467491   14960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 18:58:03.467491   14960 node_conditions.go:123] node cpu capacity is 2
	I0419 18:58:03.467491   14960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 18:58:03.467491   14960 node_conditions.go:123] node cpu capacity is 2
	I0419 18:58:03.467491   14960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 18:58:03.467491   14960 node_conditions.go:123] node cpu capacity is 2
	I0419 18:58:03.467491   14960 node_conditions.go:105] duration metric: took 6.4009ms to run NodePressure ...
	I0419 18:58:03.467491   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 18:58:03.944715   14960 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0419 18:58:03.944715   14960 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0419 18:58:03.944861   14960 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0419 18:58:03.945019   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0419 18:58:03.945019   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.945096   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.945096   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.951732   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:03.951858   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.951873   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.951873   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.951873   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.951914   14960 round_trippers.go:580]     Audit-Id: 75cb39a3-db37-4085-a28e-83bda547f8d7
	I0419 18:58:03.951914   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.951942   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.953513   14960 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1753"},"items":[{"metadata":{"name":"etcd-multinode-348000","namespace":"kube-system","uid":"33702588-cdf3-4577-b18d-18415cca2c25","resourceVersion":"1741","creationTimestamp":"2024-04-20T01:58:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.42.24:2379","kubernetes.io/config.hash":"c0cfa3da6a3913c3e67500f6c3e9d72b","kubernetes.io/config.mirror":"c0cfa3da6a3913c3e67500f6c3e9d72b","kubernetes.io/config.seen":"2024-04-20T01:57:55.099346749Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:58:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 30501 chars]
	I0419 18:58:03.954964   14960 kubeadm.go:733] kubelet initialised
	I0419 18:58:03.954964   14960 kubeadm.go:734] duration metric: took 10.1029ms waiting for restarted kubelet to initialise ...
	I0419 18:58:03.954964   14960 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 18:58:03.955514   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods
	I0419 18:58:03.955514   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.955514   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.955514   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.961575   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:03.961575   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.961575   14960 round_trippers.go:580]     Audit-Id: 56d2709a-6472-464e-83d5-a0ab21fac066
	I0419 18:58:03.961575   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.961575   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.961575   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.961575   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.961575   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.963572   14960 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1753"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87069 chars]
	I0419 18:58:03.966569   14960 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:03.967572   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:03.967572   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.967572   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.967572   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.970583   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:03.970583   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.970583   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.970583   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.970583   14960 round_trippers.go:580]     Audit-Id: 1dcea75f-fec2-4370-9489-9dddfc1fe8b8
	I0419 18:58:03.970583   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.971452   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.971452   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.971678   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:03.972286   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:03.972349   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.972349   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.972349   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.977439   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:03.977439   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.977439   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.977439   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.977439   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.977439   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.977439   14960 round_trippers.go:580]     Audit-Id: 9ef7cb1b-5484-4d97-b1e4-3dbaeb285a9d
	I0419 18:58:03.977439   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.977981   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:03.978175   14960 pod_ready.go:97] node "multinode-348000" hosting pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:03.978175   14960 pod_ready.go:81] duration metric: took 11.6057ms for pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace to be "Ready" ...
	E0419 18:58:03.978175   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000" hosting pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:03.978175   14960 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:03.978175   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-348000
	I0419 18:58:03.978175   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.978175   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.978175   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.981953   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:03.982067   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.982067   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.982067   14960 round_trippers.go:580]     Audit-Id: 80e6f705-7776-439f-9862-5c10226d579d
	I0419 18:58:03.982113   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.982113   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.982113   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.982180   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.982370   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-348000","namespace":"kube-system","uid":"33702588-cdf3-4577-b18d-18415cca2c25","resourceVersion":"1741","creationTimestamp":"2024-04-20T01:58:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.42.24:2379","kubernetes.io/config.hash":"c0cfa3da6a3913c3e67500f6c3e9d72b","kubernetes.io/config.mirror":"c0cfa3da6a3913c3e67500f6c3e9d72b","kubernetes.io/config.seen":"2024-04-20T01:57:55.099346749Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:58:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6373 chars]
	I0419 18:58:03.982920   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:03.982920   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.982920   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.982981   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.988211   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:03.988211   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.988211   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.988211   14960 round_trippers.go:580]     Audit-Id: 01f3ca9f-fb59-4d02-84a9-a84e531b5cb4
	I0419 18:58:03.988211   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.988211   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.988211   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.988211   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.988211   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:03.989166   14960 pod_ready.go:97] node "multinode-348000" hosting pod "etcd-multinode-348000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:03.989166   14960 pod_ready.go:81] duration metric: took 10.9918ms for pod "etcd-multinode-348000" in "kube-system" namespace to be "Ready" ...
	E0419 18:58:03.989166   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000" hosting pod "etcd-multinode-348000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:03.989166   14960 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:03.989166   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-348000
	I0419 18:58:03.989166   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.989166   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.989166   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.992193   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:03.992193   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.992193   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.992193   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.992193   14960 round_trippers.go:580]     Audit-Id: 446ad663-8de3-472b-9060-e16ad714a213
	I0419 18:58:03.992193   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.992193   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.992193   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.993174   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-348000","namespace":"kube-system","uid":"13adbf1b-6c17-47a9-951d-2481680a47bd","resourceVersion":"1739","creationTimestamp":"2024-04-20T01:58:01Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.42.24:8443","kubernetes.io/config.hash":"af7a3c9321ace7e2a933260472b90113","kubernetes.io/config.mirror":"af7a3c9321ace7e2a933260472b90113","kubernetes.io/config.seen":"2024-04-20T01:57:55.026086199Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:58:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7929 chars]
	I0419 18:58:03.993174   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:03.993174   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.993174   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.993174   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.996177   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:03.996177   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.996177   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.996177   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.996177   14960 round_trippers.go:580]     Audit-Id: 57f68f78-761f-44e0-9b69-55a5c52e7e07
	I0419 18:58:03.996177   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.996177   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.996177   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.997177   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:03.997177   14960 pod_ready.go:97] node "multinode-348000" hosting pod "kube-apiserver-multinode-348000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:03.997177   14960 pod_ready.go:81] duration metric: took 8.0103ms for pod "kube-apiserver-multinode-348000" in "kube-system" namespace to be "Ready" ...
	E0419 18:58:03.997177   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000" hosting pod "kube-apiserver-multinode-348000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:03.997177   14960 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:03.997177   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-348000
	I0419 18:58:03.997177   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.997177   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.997177   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:04.000182   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:04.000182   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:04.000182   14960 round_trippers.go:580]     Audit-Id: 4a6cf02f-7c9b-480a-a20e-aa1f822c2655
	I0419 18:58:04.000182   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:04.000182   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:04.000182   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:04.000182   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:04.000182   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:04.001183   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-348000","namespace":"kube-system","uid":"299bb088-9795-4452-87a8-5e96bcacedde","resourceVersion":"1738","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"30aa2729d0c65b9f89e1ae2d151edd9b","kubernetes.io/config.mirror":"30aa2729d0c65b9f89e1ae2d151edd9b","kubernetes.io/config.seen":"2024-04-20T01:35:08.321898260Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7727 chars]
	I0419 18:58:04.001183   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:04.001183   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:04.001183   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:04.001183   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:04.004187   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:04.004187   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:04.004187   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:04.004187   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:04.004187   14960 round_trippers.go:580]     Audit-Id: 4ac8285d-40e6-4016-8a1e-83d3ea5ad269
	I0419 18:58:04.004187   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:04.004187   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:04.004187   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:04.004187   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:04.005251   14960 pod_ready.go:97] node "multinode-348000" hosting pod "kube-controller-manager-multinode-348000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:04.005251   14960 pod_ready.go:81] duration metric: took 8.0746ms for pod "kube-controller-manager-multinode-348000" in "kube-system" namespace to be "Ready" ...
	E0419 18:58:04.005251   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000" hosting pod "kube-controller-manager-multinode-348000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:04.005251   14960 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2jjsq" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:04.157508   14960 request.go:629] Waited for 152.025ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2jjsq
	I0419 18:58:04.157750   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2jjsq
	I0419 18:58:04.157847   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:04.157885   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:04.157885   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:04.161740   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:04.161740   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:04.161740   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:04.161740   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:04.161740   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:04.161740   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:04.161740   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:04 GMT
	I0419 18:58:04.161740   14960 round_trippers.go:580]     Audit-Id: 6d22071c-5fdf-4004-b73c-2dede9ef23cc
	I0419 18:58:04.162270   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2jjsq","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9666ab7-0d1f-4800-b979-6e38fecdc518","resourceVersion":"1708","creationTimestamp":"2024-04-20T01:42:52Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:42:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0419 18:58:04.346096   14960 request.go:629] Waited for 183.6347ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m03
	I0419 18:58:04.346396   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m03
	I0419 18:58:04.346396   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:04.346396   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:04.346396   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:04.349148   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:04.349148   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:04.349148   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:04.349148   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:04.349148   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:04.349148   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:04 GMT
	I0419 18:58:04.349148   14960 round_trippers.go:580]     Audit-Id: f0d1db07-cdf3-4770-8c2a-ab980582dd97
	I0419 18:58:04.349148   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:04.350261   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m03","uid":"08bfca2d-b382-4052-a5b6-0a78bee7caef","resourceVersion":"1716","creationTimestamp":"2024-04-20T01:53:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_53_29_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:53:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4398 chars]
	I0419 18:58:04.350383   14960 pod_ready.go:97] node "multinode-348000-m03" hosting pod "kube-proxy-2jjsq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000-m03" has status "Ready":"Unknown"
	I0419 18:58:04.350383   14960 pod_ready.go:81] duration metric: took 345.131ms for pod "kube-proxy-2jjsq" in "kube-system" namespace to be "Ready" ...
	E0419 18:58:04.350383   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000-m03" hosting pod "kube-proxy-2jjsq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000-m03" has status "Ready":"Unknown"
	I0419 18:58:04.350383   14960 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bjv9b" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:04.548899   14960 request.go:629] Waited for 197.702ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bjv9b
	I0419 18:58:04.549179   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bjv9b
	I0419 18:58:04.549179   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:04.549179   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:04.549179   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:04.554692   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:04.554752   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:04.554752   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:04.554752   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:04.554752   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:04.554752   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:04 GMT
	I0419 18:58:04.554752   14960 round_trippers.go:580]     Audit-Id: f4192125-d973-425f-aa85-2c5ce20d2b95
	I0419 18:58:04.554817   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:04.554817   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bjv9b","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e909d14-543a-4734-8c17-7e2b8188553d","resourceVersion":"601","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
	I0419 18:58:04.752452   14960 request.go:629] Waited for 196.4962ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:58:04.752452   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:58:04.752590   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:04.752590   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:04.752590   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:04.760087   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 18:58:04.760161   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:04.760201   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:04.760201   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:04.760201   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:04 GMT
	I0419 18:58:04.760201   14960 round_trippers.go:580]     Audit-Id: 6c52c0de-d8cf-4947-bbfc-7230f03415ff
	I0419 18:58:04.760201   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:04.760241   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:04.760561   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"1672","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3826 chars]
	I0419 18:58:04.761342   14960 pod_ready.go:92] pod "kube-proxy-bjv9b" in "kube-system" namespace has status "Ready":"True"
	I0419 18:58:04.761367   14960 pod_ready.go:81] duration metric: took 410.9835ms for pod "kube-proxy-bjv9b" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:04.761425   14960 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kj76x" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:04.954605   14960 request.go:629] Waited for 192.8908ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kj76x
	I0419 18:58:04.954827   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kj76x
	I0419 18:58:04.954827   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:04.954827   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:04.954827   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:04.960479   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:04.960566   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:04.960566   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:04.960566   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:04 GMT
	I0419 18:58:04.960566   14960 round_trippers.go:580]     Audit-Id: b20d923f-8d8d-40b4-8e8b-d07f98d5f39f
	I0419 18:58:04.960566   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:04.960643   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:04.960643   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:04.960754   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kj76x","generateName":"kube-proxy-","namespace":"kube-system","uid":"274342c4-c21f-4279-b0ea-743d8e2c1463","resourceVersion":"1750","creationTimestamp":"2024-04-20T01:35:22Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0419 18:58:05.158285   14960 request.go:629] Waited for 196.5957ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:05.158285   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:05.158285   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:05.158285   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:05.158285   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:05.161856   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:05.161856   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:05.161856   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:05.161856   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:05 GMT
	I0419 18:58:05.161856   14960 round_trippers.go:580]     Audit-Id: c92c9891-ee0b-4fc1-a733-5bed4130decf
	I0419 18:58:05.161856   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:05.161856   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:05.161856   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:05.162693   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:05.162886   14960 pod_ready.go:97] node "multinode-348000" hosting pod "kube-proxy-kj76x" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:05.162886   14960 pod_ready.go:81] duration metric: took 401.4601ms for pod "kube-proxy-kj76x" in "kube-system" namespace to be "Ready" ...
	E0419 18:58:05.162886   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000" hosting pod "kube-proxy-kj76x" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:05.162886   14960 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:05.347301   14960 request.go:629] Waited for 184.415ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348000
	I0419 18:58:05.347575   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348000
	I0419 18:58:05.347637   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:05.347637   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:05.347637   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:05.351227   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:05.351844   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:05.351844   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:05 GMT
	I0419 18:58:05.351844   14960 round_trippers.go:580]     Audit-Id: ed475d8a-c6b2-41a8-8400-fe09cbd6b310
	I0419 18:58:05.351844   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:05.351844   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:05.351926   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:05.351926   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:05.352241   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-348000","namespace":"kube-system","uid":"000cfafe-a513-4738-9de2-3c25244b72be","resourceVersion":"1737","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"92813b2aed63b63058d3fd06709fa24e","kubernetes.io/config.mirror":"92813b2aed63b63058d3fd06709fa24e","kubernetes.io/config.seen":"2024-04-20T01:35:08.321899460Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5439 chars]
	I0419 18:58:05.550879   14960 request.go:629] Waited for 198.04ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:05.551253   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:05.551322   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:05.551343   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:05.551343   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:05.556254   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:05.556254   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:05.556254   14960 round_trippers.go:580]     Audit-Id: dd6509d5-df5f-4a04-b3f6-3af2738c486b
	I0419 18:58:05.556254   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:05.556254   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:05.557129   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:05.557129   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:05.557129   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:05 GMT
	I0419 18:58:05.557533   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:05.558032   14960 pod_ready.go:97] node "multinode-348000" hosting pod "kube-scheduler-multinode-348000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:05.558162   14960 pod_ready.go:81] duration metric: took 395.2748ms for pod "kube-scheduler-multinode-348000" in "kube-system" namespace to be "Ready" ...
	E0419 18:58:05.558162   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000" hosting pod "kube-scheduler-multinode-348000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:05.558232   14960 pod_ready.go:38] duration metric: took 1.6031944s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 18:58:05.558232   14960 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0419 18:58:05.581016   14960 command_runner.go:130] > -16
	I0419 18:58:05.581016   14960 ops.go:34] apiserver oom_adj: -16
	I0419 18:58:05.581095   14960 kubeadm.go:591] duration metric: took 13.0618794s to restartPrimaryControlPlane
	I0419 18:58:05.581095   14960 kubeadm.go:393] duration metric: took 13.1309941s to StartCluster
	I0419 18:58:05.581163   14960 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:58:05.581326   14960 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 18:58:05.583108   14960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:58:05.584706   14960 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.42.24 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 18:58:05.584706   14960 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0419 18:58:05.590659   14960 out.go:177] * Verifying Kubernetes components...
	I0419 18:58:05.585386   14960 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:58:05.595154   14960 out.go:177] * Enabled addons: 
	I0419 18:58:05.599861   14960 addons.go:505] duration metric: took 15.1552ms for enable addons: enabled=[]
	I0419 18:58:05.611465   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:58:05.940894   14960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 18:58:05.977364   14960 node_ready.go:35] waiting up to 6m0s for node "multinode-348000" to be "Ready" ...
	I0419 18:58:05.977364   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:05.977364   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:05.977364   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:05.977364   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:05.980929   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:05.980929   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:05.980929   14960 round_trippers.go:580]     Audit-Id: 5b823e72-37e3-4749-8e3c-817044127e8b
	I0419 18:58:05.980929   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:05.980929   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:05.980929   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:05.980929   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:05.981558   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:05 GMT
	I0419 18:58:05.981779   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:06.493376   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:06.493376   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:06.493501   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:06.493501   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:06.497243   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:06.497243   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:06.497243   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:06.498029   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:06.498029   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:06.498029   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:06 GMT
	I0419 18:58:06.498029   14960 round_trippers.go:580]     Audit-Id: dc04da86-0711-4288-ab53-bafaf5cafc85
	I0419 18:58:06.498029   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:06.498446   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:06.988833   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:06.988952   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:06.988952   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:06.988952   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:06.999126   14960 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0419 18:58:06.999126   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:06.999126   14960 round_trippers.go:580]     Audit-Id: a9e144a2-0f2b-47e4-ac7e-6908a1386f24
	I0419 18:58:06.999126   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:06.999126   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:06.999126   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:06.999126   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:06.999126   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:06 GMT
	I0419 18:58:06.999946   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:07.489616   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:07.489616   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:07.489616   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:07.489616   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:07.494221   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:07.494221   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:07.494474   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:07.494474   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:07.494474   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:07 GMT
	I0419 18:58:07.494474   14960 round_trippers.go:580]     Audit-Id: 9e82518f-ed96-499e-959c-993a7581e1bd
	I0419 18:58:07.494474   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:07.494474   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:07.494703   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:07.993160   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:07.993160   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:07.993160   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:07.993160   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:07.996761   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:07.997823   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:07.997823   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:07.997823   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:07.997823   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:07 GMT
	I0419 18:58:07.997939   14960 round_trippers.go:580]     Audit-Id: 7b222396-a5a7-4761-8afa-58e024e1d86e
	I0419 18:58:07.997939   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:07.997939   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:07.998141   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:07.998540   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:08.491774   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:08.491774   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:08.491774   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:08.491774   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:08.496678   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:08.496678   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:08.496678   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:08.496678   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:08.496678   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:08.496678   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:08 GMT
	I0419 18:58:08.496945   14960 round_trippers.go:580]     Audit-Id: d7f073fc-ebc5-4a53-864b-88298ab470ce
	I0419 18:58:08.496945   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:08.497032   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:08.990889   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:08.990889   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:08.990889   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:08.990889   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:08.995433   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:08.995433   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:08.995433   14960 round_trippers.go:580]     Audit-Id: 62277be1-7b3b-497f-9691-0be4e0d6903b
	I0419 18:58:08.995433   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:08.995520   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:08.995520   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:08.995520   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:08.995520   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:08 GMT
	I0419 18:58:08.995718   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:09.487311   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:09.487502   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:09.487502   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:09.487502   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:09.492797   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:09.492797   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:09.492797   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:09.492797   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:09.492797   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:09 GMT
	I0419 18:58:09.492797   14960 round_trippers.go:580]     Audit-Id: 0ec245d4-c9ed-4dd1-b22d-c14e3eed2e8e
	I0419 18:58:09.492797   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:09.492797   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:09.492797   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:09.986137   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:09.986215   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:09.986215   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:09.986215   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:09.989601   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:09.989601   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:09.989601   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:09 GMT
	I0419 18:58:09.989601   14960 round_trippers.go:580]     Audit-Id: 198f07e3-9c4a-4bc0-a94a-a301104e275a
	I0419 18:58:09.989601   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:09.989601   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:09.989601   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:09.989601   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:09.990517   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:10.483090   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:10.483166   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:10.483166   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:10.483166   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:10.487627   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:10.487843   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:10.487843   14960 round_trippers.go:580]     Audit-Id: 7f0144b9-0b59-4814-98e4-04748ec905a7
	I0419 18:58:10.487843   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:10.487843   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:10.487843   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:10.487843   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:10.487843   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:10 GMT
	I0419 18:58:10.488211   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:10.488591   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:10.981905   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:10.981905   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:10.981905   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:10.981905   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:10.987221   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:10.987221   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:10.987221   14960 round_trippers.go:580]     Audit-Id: a21010df-4f10-4e74-b424-3c97f295e0c9
	I0419 18:58:10.987221   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:10.987221   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:10.987221   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:10.987221   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:10.987221   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:10 GMT
	I0419 18:58:10.987221   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:11.482219   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:11.482423   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:11.482423   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:11.482423   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:11.486379   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:11.486379   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:11.486379   14960 round_trippers.go:580]     Audit-Id: 3a82bfc8-f5b2-4c8e-aba6-d1190ccfe77f
	I0419 18:58:11.486379   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:11.486814   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:11.486814   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:11.486814   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:11.486863   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:11 GMT
	I0419 18:58:11.487131   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:11.984375   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:11.984375   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:11.984616   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:11.984616   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:11.989438   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:11.989438   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:11.989438   14960 round_trippers.go:580]     Audit-Id: 6c34e4b9-5cf7-4238-97ee-47f52fbcb9df
	I0419 18:58:11.989512   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:11.989512   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:11.989512   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:11.989512   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:11.989512   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:11 GMT
	I0419 18:58:11.989622   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:12.483365   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:12.483440   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:12.483440   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:12.483440   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:12.487812   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:12.487914   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:12.487914   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:12.487914   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:12.487914   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:12.487914   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:12.487914   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:12 GMT
	I0419 18:58:12.487914   14960 round_trippers.go:580]     Audit-Id: b0cb6da3-1724-4c6a-86da-60aca41f3b7a
	I0419 18:58:12.488217   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:12.488873   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:12.985769   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:12.985769   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:12.985769   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:12.985769   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:12.989854   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:12.989854   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:12.989854   14960 round_trippers.go:580]     Audit-Id: 596287ac-2ce3-4d07-a8cd-25176a9e90b3
	I0419 18:58:12.989854   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:12.989979   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:12.989979   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:12.989979   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:12.989979   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:12 GMT
	I0419 18:58:12.990099   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:13.491899   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:13.491958   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:13.491958   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:13.491958   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:13.496840   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:13.496840   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:13.496840   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:13 GMT
	I0419 18:58:13.496840   14960 round_trippers.go:580]     Audit-Id: a9be612e-f745-4164-8b9b-a485ab202080
	I0419 18:58:13.496840   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:13.496840   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:13.496840   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:13.496840   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:13.497232   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:13.992146   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:13.992194   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:13.992194   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:13.992194   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:13.996852   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:13.996852   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:13.996852   14960 round_trippers.go:580]     Audit-Id: 698f39fd-db9e-490d-9655-093ee63efa8c
	I0419 18:58:13.996852   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:13.996852   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:13.996976   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:13.996976   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:13.996976   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:13 GMT
	I0419 18:58:13.997315   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:14.489979   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:14.490085   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:14.490085   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:14.490085   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:14.497911   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 18:58:14.497911   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:14.497911   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:14.497911   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:14.497911   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:14.498111   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:14 GMT
	I0419 18:58:14.498111   14960 round_trippers.go:580]     Audit-Id: c96ca703-b5cc-483e-99fd-9b542ee5fc5d
	I0419 18:58:14.498111   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:14.498190   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:14.498858   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:14.992458   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:14.992458   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:14.992458   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:14.992458   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:14.996967   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:14.996967   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:14.997046   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:14.997069   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:14.997069   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:14.997069   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:14 GMT
	I0419 18:58:14.997069   14960 round_trippers.go:580]     Audit-Id: e74cccc8-e5fd-474f-91c6-31742f8ef8e7
	I0419 18:58:14.997069   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:14.997261   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:15.481566   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:15.481903   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:15.481984   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:15.481984   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:15.485636   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:15.485636   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:15.485636   14960 round_trippers.go:580]     Audit-Id: 385d8f41-1bac-4a6e-859d-db69eb2127e6
	I0419 18:58:15.485636   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:15.486094   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:15.486094   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:15.486094   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:15.486094   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:15 GMT
	I0419 18:58:15.486175   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:15.978206   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:15.978295   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:15.978357   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:15.978357   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:15.982784   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:15.982784   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:15.982984   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:15 GMT
	I0419 18:58:15.982984   14960 round_trippers.go:580]     Audit-Id: 6b9e2378-61f9-4b9d-a565-32ec9d4be0ef
	I0419 18:58:15.982984   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:15.982984   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:15.982984   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:15.982984   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:15.983496   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:16.480408   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:16.480408   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:16.480408   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:16.480408   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:16.483674   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:16.483674   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:16.483674   14960 round_trippers.go:580]     Audit-Id: 5c9a618b-6387-431b-a73e-20d5f3a6eff9
	I0419 18:58:16.484597   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:16.484597   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:16.484597   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:16.484597   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:16.484597   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:16 GMT
	I0419 18:58:16.484930   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:16.984638   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:16.984638   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:16.984767   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:16.984767   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:16.994928   14960 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0419 18:58:16.995717   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:16.995717   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:16.995717   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:16.995717   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:16.995717   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:16.995788   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:16 GMT
	I0419 18:58:16.995788   14960 round_trippers.go:580]     Audit-Id: c7dc3c38-79fa-429e-9797-29632803151b
	I0419 18:58:16.996087   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:16.996438   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:17.483287   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:17.483287   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:17.483287   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:17.483287   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:17.486940   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:17.486940   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:17.486940   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:17 GMT
	I0419 18:58:17.486940   14960 round_trippers.go:580]     Audit-Id: b877e136-6c02-4ac7-963d-4ab9cc1dab52
	I0419 18:58:17.486940   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:17.487943   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:17.487943   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:17.487977   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:17.488298   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:17.983477   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:17.983560   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:17.983560   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:17.983560   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:17.990789   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 18:58:17.990789   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:17.990789   14960 round_trippers.go:580]     Audit-Id: d3b51334-e138-4546-98aa-fabc11237f10
	I0419 18:58:17.990789   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:17.990789   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:17.990789   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:17.990789   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:17.990789   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:17 GMT
	I0419 18:58:17.990789   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:18.480135   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:18.480135   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:18.480135   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:18.480135   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:18.484237   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:18.484237   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:18.484314   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:18.484415   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:18.484415   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:18 GMT
	I0419 18:58:18.484415   14960 round_trippers.go:580]     Audit-Id: 5a88e0f4-805e-4b41-bada-445c0481d452
	I0419 18:58:18.484415   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:18.484500   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:18.484705   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:18.989029   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:18.989029   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:18.989103   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:18.989103   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:18.992479   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:18.992479   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:18.992479   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:18.992479   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:18.992479   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:18 GMT
	I0419 18:58:18.992479   14960 round_trippers.go:580]     Audit-Id: 51f8bd66-080e-439c-884d-ea12bc0123b1
	I0419 18:58:18.993444   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:18.993444   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:18.993606   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:19.486313   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:19.486313   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:19.486313   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:19.486313   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:19.491910   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:19.492948   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:19.492948   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:19.492948   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:19.492948   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:19.492948   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:19.492948   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:19 GMT
	I0419 18:58:19.492948   14960 round_trippers.go:580]     Audit-Id: 394534e3-c860-4482-9825-50c3edd558ee
	I0419 18:58:19.493195   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:19.493787   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:19.985432   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:19.985432   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:19.985432   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:19.985432   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:19.990055   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:19.990055   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:19.990055   14960 round_trippers.go:580]     Audit-Id: c5a00eff-7f19-472b-aa46-2ec59e6653d7
	I0419 18:58:19.990055   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:19.990055   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:19.990055   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:19.990055   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:19.990055   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:19 GMT
	I0419 18:58:19.990514   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:20.484249   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:20.484249   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:20.484249   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:20.484249   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:20.489151   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:20.489151   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:20.489151   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:20.489151   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:20.489151   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:20 GMT
	I0419 18:58:20.489151   14960 round_trippers.go:580]     Audit-Id: ecf59d52-fee9-4c61-89cb-59ff2e124630
	I0419 18:58:20.489363   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:20.489363   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:20.489686   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:20.987089   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:20.987089   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:20.987089   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:20.987089   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:20.991601   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:20.991601   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:20.991601   14960 round_trippers.go:580]     Audit-Id: 91378339-774f-40c2-99fc-3a9c160db851
	I0419 18:58:20.991601   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:20.991707   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:20.991707   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:20.991707   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:20.991707   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:20 GMT
	I0419 18:58:20.991839   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:21.487012   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:21.487251   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:21.487251   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:21.487251   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:21.491637   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:21.491696   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:21.491696   14960 round_trippers.go:580]     Audit-Id: 174ae4b8-573e-46a3-85e1-b5133f6aefe6
	I0419 18:58:21.491696   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:21.491696   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:21.491696   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:21.491696   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:21.491771   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:21 GMT
	I0419 18:58:21.492122   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:21.986202   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:21.986269   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:21.986269   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:21.986269   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:21.990059   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:21.990364   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:21.990430   14960 round_trippers.go:580]     Audit-Id: 355ca03f-5bfe-4dcc-a187-fa7ba5ccc8ee
	I0419 18:58:21.990430   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:21.990430   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:21.990430   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:21.990430   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:21.990430   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:21 GMT
	I0419 18:58:21.991726   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:21.991992   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:22.484941   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:22.484941   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:22.484941   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:22.484941   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:22.488937   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:22.489385   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:22.489472   14960 round_trippers.go:580]     Audit-Id: 8a88ec10-1a6c-41fb-b998-329bf8c60ca5
	I0419 18:58:22.489472   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:22.489472   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:22.489472   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:22.489472   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:22.489472   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:22 GMT
	I0419 18:58:22.489472   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:22.985695   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:22.985939   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:22.985939   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:22.985939   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:22.990270   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:22.990495   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:22.990495   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:22.990495   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:22.990495   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:22.990495   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:22.990652   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:22 GMT
	I0419 18:58:22.990735   14960 round_trippers.go:580]     Audit-Id: 904756ba-e5c0-4ab2-8c78-e62ab91e1b24
	I0419 18:58:22.991032   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:23.488472   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:23.488646   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:23.488646   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:23.488720   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:23.492523   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:23.492523   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:23.492523   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:23 GMT
	I0419 18:58:23.492523   14960 round_trippers.go:580]     Audit-Id: 5d7a24d2-395b-497c-9461-24a435416f57
	I0419 18:58:23.493438   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:23.493438   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:23.493438   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:23.493511   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:23.493955   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:23.989990   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:23.989990   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:23.989990   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:23.989990   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:23.996348   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:23.996348   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:23.996348   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:23.996731   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:23 GMT
	I0419 18:58:23.996731   14960 round_trippers.go:580]     Audit-Id: b7ca2701-8de8-4a09-b367-ab2626abd839
	I0419 18:58:23.996731   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:23.996731   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:23.996731   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:23.996897   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:23.997428   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:24.491745   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:24.491913   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:24.491913   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:24.492006   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:24.497994   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:24.497994   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:24.497994   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:24.497994   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:24 GMT
	I0419 18:58:24.498541   14960 round_trippers.go:580]     Audit-Id: 590b8801-1035-431c-b3db-2b5bedccac75
	I0419 18:58:24.498541   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:24.498541   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:24.498541   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:24.498741   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:24.988768   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:24.988768   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:24.988768   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:24.988768   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:24.992625   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:24.992625   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:24.992625   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:24.992625   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:24.992740   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:24.992740   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:24 GMT
	I0419 18:58:24.992740   14960 round_trippers.go:580]     Audit-Id: 40de5aeb-49e7-4cec-b1d1-3226c37e5be3
	I0419 18:58:24.992740   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:24.992973   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:25.491781   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:25.491984   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:25.492060   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:25.492060   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:25.495401   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:25.495401   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:25.495594   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:25.495594   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:25.495594   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:25.495594   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:25 GMT
	I0419 18:58:25.495594   14960 round_trippers.go:580]     Audit-Id: 0dac004f-567e-472a-b39d-e812c05dcc14
	I0419 18:58:25.495594   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:25.495925   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:25.992044   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:25.992319   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:25.992319   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:25.992319   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:25.996781   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:25.996781   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:25.996781   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:25.996781   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:25.996781   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:25.996781   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:25 GMT
	I0419 18:58:25.996781   14960 round_trippers.go:580]     Audit-Id: ab791579-2850-4a4e-ad33-3ddc721d9eaf
	I0419 18:58:25.996781   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:25.996781   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:26.492753   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:26.492753   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:26.492885   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:26.492885   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:26.497033   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:26.497092   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:26.497092   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:26.497092   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:26.497157   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:26.497157   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:26 GMT
	I0419 18:58:26.497157   14960 round_trippers.go:580]     Audit-Id: cf7b4c78-6f19-4646-b59b-e83f89eac2d9
	I0419 18:58:26.497157   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:26.497251   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:26.498131   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:26.979727   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:26.979727   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:26.979727   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:26.979727   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:26.982294   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:26.983326   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:26.983326   14960 round_trippers.go:580]     Audit-Id: 7e70477f-f172-4bc2-be7f-39abd031b1e8
	I0419 18:58:26.983326   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:26.983326   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:26.983326   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:26.983326   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:26.983326   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:26 GMT
	I0419 18:58:26.983528   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:27.481485   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:27.481485   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:27.482026   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:27.482026   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:27.485468   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:27.485468   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:27.486417   14960 round_trippers.go:580]     Audit-Id: a8e51a40-fb74-4ff7-97e0-44d54f09be54
	I0419 18:58:27.486621   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:27.486621   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:27.486621   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:27.486621   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:27.486621   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:27 GMT
	I0419 18:58:27.486862   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:27.981256   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:27.981256   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:27.981256   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:27.981256   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:27.985410   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:27.985837   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:27.985837   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:27 GMT
	I0419 18:58:27.985837   14960 round_trippers.go:580]     Audit-Id: 2e1e06bc-1c28-44fc-950f-a4500c753538
	I0419 18:58:27.985837   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:27.985837   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:27.985837   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:27.985837   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:27.986173   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:28.481822   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:28.481822   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:28.482063   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:28.482063   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:28.486243   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:28.486243   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:28.486243   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:28 GMT
	I0419 18:58:28.486243   14960 round_trippers.go:580]     Audit-Id: d235a4c3-9ce5-4a4a-9321-6bc4bc6eaf50
	I0419 18:58:28.486865   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:28.486865   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:28.486865   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:28.486865   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:28.487214   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:28.983975   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:28.983975   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:28.984102   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:28.984102   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:28.988037   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:28.988037   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:28.988037   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:28 GMT
	I0419 18:58:28.988500   14960 round_trippers.go:580]     Audit-Id: 52520614-da2c-4c7c-9a51-7e7bc6328c02
	I0419 18:58:28.988500   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:28.988500   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:28.988500   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:28.988500   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:28.988738   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:28.989684   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:29.484695   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:29.484695   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:29.484695   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:29.484695   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:29.489433   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:29.489433   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:29.489433   14960 round_trippers.go:580]     Audit-Id: 1bb7751c-3178-4dfa-99e2-d69d98abec80
	I0419 18:58:29.489433   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:29.489433   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:29.489433   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:29.490241   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:29.490241   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:29 GMT
	I0419 18:58:29.490936   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:29.981921   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:29.981992   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:29.982018   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:29.982018   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:29.986023   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:29.986023   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:29.986023   14960 round_trippers.go:580]     Audit-Id: e3d4a60a-a553-4544-975b-96b45c101e85
	I0419 18:58:29.986023   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:29.986621   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:29.986621   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:29.986621   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:29.986621   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:29 GMT
	I0419 18:58:29.986621   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:30.480661   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:30.480772   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:30.480772   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:30.480772   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:30.485136   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:30.485234   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:30.485234   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:30.485234   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:30.485234   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:30 GMT
	I0419 18:58:30.485234   14960 round_trippers.go:580]     Audit-Id: 1a1ee964-b3b5-42fa-bb7b-aaaee26dffbc
	I0419 18:58:30.485234   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:30.485234   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:30.485633   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:30.979752   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:30.979852   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:30.979852   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:30.979852   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:30.984171   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:30.984171   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:30.984434   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:30.984434   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:30.984434   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:30 GMT
	I0419 18:58:30.984434   14960 round_trippers.go:580]     Audit-Id: 2c683212-bcf8-4ec1-b437-3e6484c70512
	I0419 18:58:30.984434   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:30.984434   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:30.984648   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:31.479048   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:31.479341   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:31.479341   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:31.479341   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:31.484054   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:31.484054   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:31.484153   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:31.484153   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:31 GMT
	I0419 18:58:31.484153   14960 round_trippers.go:580]     Audit-Id: 9b6503df-3cdc-41a8-b77d-1243dbfe99ed
	I0419 18:58:31.484153   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:31.484153   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:31.484153   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:31.484351   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:31.485282   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:31.979100   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:31.979186   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:31.979186   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:31.979186   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:31.983861   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:31.983959   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:31.983959   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:31 GMT
	I0419 18:58:31.983959   14960 round_trippers.go:580]     Audit-Id: f4b80f4c-27ba-4c8e-951a-ec6c211ea215
	I0419 18:58:31.983959   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:31.983959   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:31.984049   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:31.984049   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:31.984082   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:32.492476   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:32.492538   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:32.492538   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:32.492538   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:32.496203   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:32.496203   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:32.496203   14960 round_trippers.go:580]     Audit-Id: 9106f3b5-f4a9-4013-a952-2a4cbcd86b91
	I0419 18:58:32.496203   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:32.496203   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:32.497208   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:32.497208   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:32.497208   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:32 GMT
	I0419 18:58:32.497626   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:32.991994   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:32.992203   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:32.992203   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:32.992203   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:32.997720   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:32.997815   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:32.997815   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:32 GMT
	I0419 18:58:32.997815   14960 round_trippers.go:580]     Audit-Id: e1be733d-d529-4b15-a3b3-db872c0af358
	I0419 18:58:32.997815   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:32.997815   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:32.997815   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:32.997815   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:32.998041   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:33.479218   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:33.479218   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:33.479218   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:33.479218   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:33.484075   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:33.484304   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:33.484304   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:33.484304   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:33.484304   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:33.484304   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:33 GMT
	I0419 18:58:33.484304   14960 round_trippers.go:580]     Audit-Id: 60c35584-2256-45dc-9f66-ac614b0d23f2
	I0419 18:58:33.484304   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:33.484726   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:33.992946   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:33.993031   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:33.993031   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:33.993031   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:33.997675   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:33.998044   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:33.998044   14960 round_trippers.go:580]     Audit-Id: 3cce2461-88c3-4efd-a922-113ef0176de6
	I0419 18:58:33.998044   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:33.998044   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:33.998044   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:33.998044   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:33.998125   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:33 GMT
	I0419 18:58:33.998703   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:33.999045   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:34.478396   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:34.478580   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:34.478580   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:34.478580   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:34.483102   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:34.484220   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:34.484245   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:34.484245   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:34.484245   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:34.484351   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:34.484351   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:34 GMT
	I0419 18:58:34.484379   14960 round_trippers.go:580]     Audit-Id: 7990ce83-830a-406c-bedc-1b471a256f80
	I0419 18:58:34.484576   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:34.979897   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:34.979897   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:34.980138   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:34.980138   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:34.986821   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:34.986821   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:34.986821   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:34 GMT
	I0419 18:58:34.986821   14960 round_trippers.go:580]     Audit-Id: d438732c-5f71-46fe-a51f-6324c857fcb3
	I0419 18:58:34.986821   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:34.986821   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:34.986821   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:34.986821   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:34.987479   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:35.487415   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:35.487415   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:35.487415   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:35.487415   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:35.492111   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:35.492111   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:35.492507   14960 round_trippers.go:580]     Audit-Id: f0684838-0d5b-46fe-873f-9307f8f29e58
	I0419 18:58:35.492507   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:35.492507   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:35.492507   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:35.492507   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:35.492563   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:35 GMT
	I0419 18:58:35.493149   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:35.987947   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:35.987947   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:35.987947   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:35.987947   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:35.992540   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:35.993000   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:35.993000   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:35.993000   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:35.993000   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:35 GMT
	I0419 18:58:35.993000   14960 round_trippers.go:580]     Audit-Id: 7da89ce3-966b-4635-8ded-cbe3a7720279
	I0419 18:58:35.993000   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:35.993000   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:35.993568   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:36.491899   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:36.491899   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:36.491899   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:36.491899   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:36.495733   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:36.495733   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:36.495733   14960 round_trippers.go:580]     Audit-Id: 39d1e45d-df40-41b2-a6be-b6569e69f885
	I0419 18:58:36.495733   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:36.495733   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:36.495733   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:36.495733   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:36.495733   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:36 GMT
	I0419 18:58:36.498301   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:36.498301   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:36.989042   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:36.989042   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:36.989042   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:36.989042   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:36.992655   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:36.992936   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:36.992936   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:36.992936   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:36.992936   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:36.993042   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:36 GMT
	I0419 18:58:36.993042   14960 round_trippers.go:580]     Audit-Id: e758599f-a7ce-4323-8b61-4c8330646142
	I0419 18:58:36.993042   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:36.993223   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:37.480047   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:37.480105   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:37.480162   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:37.480162   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:37.484810   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:37.485835   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:37.485835   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:37.485835   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:37.485835   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:37.485835   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:37.485835   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:37 GMT
	I0419 18:58:37.485835   14960 round_trippers.go:580]     Audit-Id: a80fdb9a-a5bd-4899-a07f-2f927f422a4e
	I0419 18:58:37.485835   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:37.978082   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:37.978082   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:37.978082   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:37.978082   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:37.982673   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:37.983234   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:37.983234   14960 round_trippers.go:580]     Audit-Id: c8b7a7d4-9fef-4e1c-a142-0ae3273042b5
	I0419 18:58:37.983315   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:37.983383   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:37.983458   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:37.983579   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:37.983951   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:37 GMT
	I0419 18:58:37.983995   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:38.478420   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:38.478597   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:38.478597   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:38.478597   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:38.483044   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:38.483044   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:38.483435   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:38 GMT
	I0419 18:58:38.483435   14960 round_trippers.go:580]     Audit-Id: 59e612c9-8ee3-4ec9-bb2f-040f12903b73
	I0419 18:58:38.483435   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:38.483478   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:38.483478   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:38.483478   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:38.483478   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:38.992775   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:38.992775   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:38.992775   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:38.992775   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:38.997391   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:38.997678   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:38.997678   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:38.997678   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:38.997678   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:38 GMT
	I0419 18:58:38.997678   14960 round_trippers.go:580]     Audit-Id: aa3a7d39-2a6e-4cdd-b0c7-993a7f6810b5
	I0419 18:58:38.997678   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:38.997678   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:38.997939   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:38.998405   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:39.478348   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:39.478455   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:39.478455   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:39.478455   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:39.482274   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:39.482274   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:39.482607   14960 round_trippers.go:580]     Audit-Id: 2bb32623-0683-4ce3-ab82-7bf09fe69820
	I0419 18:58:39.482607   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:39.482607   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:39.482607   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:39.482607   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:39.482607   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:39 GMT
	I0419 18:58:39.482665   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:39.979580   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:39.979580   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:39.979580   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:39.979673   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:39.983028   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:39.983028   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:39.983762   14960 round_trippers.go:580]     Audit-Id: 57250717-021f-4608-8915-7976dba89df6
	I0419 18:58:39.983762   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:39.983762   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:39.983762   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:39.983762   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:39.983762   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:39 GMT
	I0419 18:58:39.983947   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:40.479580   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:40.479580   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:40.479580   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:40.479580   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:40.483201   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:40.483201   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:40.483201   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:40.483201   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:40 GMT
	I0419 18:58:40.483201   14960 round_trippers.go:580]     Audit-Id: fcdf4806-7432-46d5-b0ba-8c814f2a72b8
	I0419 18:58:40.483201   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:40.483201   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:40.483201   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:40.484393   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1901","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0419 18:58:40.485002   14960 node_ready.go:49] node "multinode-348000" has status "Ready":"True"
	I0419 18:58:40.485002   14960 node_ready.go:38] duration metric: took 34.5075655s for node "multinode-348000" to be "Ready" ...
	I0419 18:58:40.485002   14960 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 18:58:40.485187   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods
	I0419 18:58:40.485187   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:40.485187   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:40.485187   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:40.491960   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:40.491960   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:40.491960   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:40.492179   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:40.492179   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:40.492179   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:40 GMT
	I0419 18:58:40.492179   14960 round_trippers.go:580]     Audit-Id: 7c96723e-ab3e-495e-9131-b60af96c0f86
	I0419 18:58:40.492179   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:40.493699   14960 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1901"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86508 chars]
	I0419 18:58:40.498270   14960 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:40.498514   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:40.498571   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:40.498595   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:40.498595   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:40.501450   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:40.501450   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:40.501450   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:40.501450   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:40.501450   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:40 GMT
	I0419 18:58:40.501450   14960 round_trippers.go:580]     Audit-Id: 262fd8ff-e2ea-4238-9261-a77d31124661
	I0419 18:58:40.501450   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:40.501450   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:40.501450   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:40.501450   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:40.501450   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:40.501450   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:40.501450   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:40.504458   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:40.504458   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:40.504458   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:40.504458   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:40 GMT
	I0419 18:58:40.504458   14960 round_trippers.go:580]     Audit-Id: 9bead262-6e5d-4fc7-8512-d581f960899d
	I0419 18:58:40.504458   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:40.504458   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:40.504458   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:40.505438   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1901","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0419 18:58:41.010413   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:41.010413   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:41.010413   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:41.010413   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:41.016549   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:41.016549   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:41.016549   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:41.016549   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:41.016549   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:41.016549   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:41.016549   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:41 GMT
	I0419 18:58:41.016549   14960 round_trippers.go:580]     Audit-Id: c0625f30-a712-402f-937a-6c78a76b7102
	I0419 18:58:41.016549   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:41.017670   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:41.017670   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:41.017755   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:41.017755   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:41.020594   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:41.020594   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:41.020594   14960 round_trippers.go:580]     Audit-Id: f42508d5-a2c5-4729-af66-b9722c159054
	I0419 18:58:41.021166   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:41.021166   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:41.021166   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:41.021166   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:41.021166   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:41 GMT
	I0419 18:58:41.021443   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1901","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0419 18:58:41.513965   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:41.514023   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:41.514023   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:41.514023   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:41.519082   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:41.519082   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:41.519082   14960 round_trippers.go:580]     Audit-Id: 1765940f-e494-44b8-9bf6-962e270c084e
	I0419 18:58:41.519082   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:41.519082   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:41.519082   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:41.519082   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:41.519184   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:41 GMT
	I0419 18:58:41.519184   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:41.520067   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:41.520157   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:41.520157   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:41.520157   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:41.523650   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:41.523650   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:41.523650   14960 round_trippers.go:580]     Audit-Id: 21ba301e-0b7e-4d46-8403-882f74962b6c
	I0419 18:58:41.523650   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:41.523650   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:41.523650   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:41.523650   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:41.523650   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:41 GMT
	I0419 18:58:41.523650   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1901","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0419 18:58:42.012364   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:42.012440   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:42.012519   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:42.012519   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:42.016760   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:42.016760   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:42.016760   14960 round_trippers.go:580]     Audit-Id: 432744f6-25be-4bcf-b2b8-92e595f26fb5
	I0419 18:58:42.016760   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:42.016760   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:42.016760   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:42.016760   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:42.016760   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:42 GMT
	I0419 18:58:42.017546   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:42.018325   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:42.018325   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:42.018421   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:42.018421   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:42.021156   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:42.021156   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:42.021156   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:42.021156   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:42.021478   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:42.021478   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:42.021478   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:42 GMT
	I0419 18:58:42.021478   14960 round_trippers.go:580]     Audit-Id: 0faf17a2-ca47-45f9-9864-5845f3737a8d
	I0419 18:58:42.021917   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1901","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0419 18:58:42.500197   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:42.500197   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:42.500197   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:42.500197   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:42.505796   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:42.505851   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:42.505851   14960 round_trippers.go:580]     Audit-Id: 1b6a65bd-9010-4d7c-a24a-815e6aee4e0a
	I0419 18:58:42.505917   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:42.505917   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:42.505945   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:42.505945   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:42.505945   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:42 GMT
	I0419 18:58:42.506130   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:42.506770   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:42.506908   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:42.506908   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:42.506908   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:42.510541   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:42.510541   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:42.510541   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:42 GMT
	I0419 18:58:42.511068   14960 round_trippers.go:580]     Audit-Id: f1c31540-1623-47ca-816a-c285ef546234
	I0419 18:58:42.511068   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:42.511068   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:42.511068   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:42.511068   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:42.511294   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1901","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0419 18:58:42.511294   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:58:42.999996   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:42.999996   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:42.999996   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:42.999996   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:43.003904   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:43.003904   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:43.003904   14960 round_trippers.go:580]     Audit-Id: 38130497-b08a-4540-ab78-f0dcd04f45a0
	I0419 18:58:43.003904   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:43.003904   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:43.003904   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:43.003904   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:43.003904   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:43 GMT
	I0419 18:58:43.011664   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:43.012620   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:43.012681   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:43.012727   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:43.012727   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:43.015419   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:43.015904   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:43.015904   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:43.015904   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:43.015904   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:43.015904   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:43.015904   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:43 GMT
	I0419 18:58:43.015904   14960 round_trippers.go:580]     Audit-Id: 2ebc50f9-46dc-44f4-a06d-ef359a840493
	I0419 18:58:43.015904   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1901","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0419 18:58:43.502638   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:43.502737   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:43.502737   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:43.502737   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:43.507182   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:43.507182   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:43.507182   14960 round_trippers.go:580]     Audit-Id: 8a0bdd57-3646-41c8-985d-7cb28ad124d7
	I0419 18:58:43.507182   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:43.507322   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:43.507322   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:43.507322   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:43.507322   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:43 GMT
	I0419 18:58:43.507448   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:43.508200   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:43.508299   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:43.508299   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:43.508299   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:43.510746   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:43.511422   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:43.511422   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:43.511422   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:43.511422   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:43 GMT
	I0419 18:58:43.511422   14960 round_trippers.go:580]     Audit-Id: 037f0cdd-9853-44e6-8c3e-02d4eb9b0885
	I0419 18:58:43.511422   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:43.511422   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:43.511707   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:44.004357   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:44.004357   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:44.004357   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:44.004357   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:44.007999   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:44.007999   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:44.008965   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:44.008965   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:44 GMT
	I0419 18:58:44.008965   14960 round_trippers.go:580]     Audit-Id: f4d297f8-e5fa-4c02-8821-ec698c6c99ee
	I0419 18:58:44.009033   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:44.009033   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:44.009033   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:44.009341   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:44.009917   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:44.009917   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:44.009917   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:44.009917   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:44.012540   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:44.012540   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:44.012540   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:44.012540   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:44 GMT
	I0419 18:58:44.013586   14960 round_trippers.go:580]     Audit-Id: 85776f77-d624-4ff2-b744-427d4a063e7f
	I0419 18:58:44.013586   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:44.013586   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:44.013586   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:44.013908   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:44.506747   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:44.506849   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:44.506849   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:44.506985   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:44.510148   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:44.510988   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:44.510988   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:44.510988   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:44.510988   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:44.510988   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:44 GMT
	I0419 18:58:44.510988   14960 round_trippers.go:580]     Audit-Id: 15d31706-61ed-4941-be75-217af00039d1
	I0419 18:58:44.510988   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:44.511355   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:44.512387   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:44.512497   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:44.512497   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:44.512497   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:44.518174   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:44.518174   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:44.518174   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:44.518174   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:44.518174   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:44 GMT
	I0419 18:58:44.518174   14960 round_trippers.go:580]     Audit-Id: 2783d27d-2bb7-460d-8316-5c3bda8ca857
	I0419 18:58:44.518174   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:44.518174   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:44.518174   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:44.518940   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:58:45.003173   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:45.003173   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:45.003173   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:45.003173   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:45.008956   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:45.008956   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:45.008956   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:45.008956   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:45 GMT
	I0419 18:58:45.008956   14960 round_trippers.go:580]     Audit-Id: eda1ce7d-4833-4d57-8ce8-6f929e1ea4ff
	I0419 18:58:45.008956   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:45.008956   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:45.008956   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:45.009310   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:45.010135   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:45.010181   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:45.010181   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:45.010181   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:45.012222   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:45.012699   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:45.012699   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:45 GMT
	I0419 18:58:45.012699   14960 round_trippers.go:580]     Audit-Id: 7c7aad20-cf5d-40f8-9038-1f5df97ce0d4
	I0419 18:58:45.012699   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:45.012699   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:45.012770   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:45.012770   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:45.013048   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:45.507116   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:45.507116   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:45.507116   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:45.507116   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:45.510189   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:45.510553   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:45.510627   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:45.510627   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:45.510627   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:45 GMT
	I0419 18:58:45.510627   14960 round_trippers.go:580]     Audit-Id: 2459bd4a-db4e-4728-9b4c-98a1e8754a66
	I0419 18:58:45.510627   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:45.510627   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:45.510762   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:45.511687   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:45.511687   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:45.511738   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:45.511738   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:45.517981   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:45.518090   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:45.518090   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:45.518090   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:45 GMT
	I0419 18:58:45.518090   14960 round_trippers.go:580]     Audit-Id: ee6d2569-c663-440a-a654-db5cc68b697b
	I0419 18:58:45.518090   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:45.518152   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:45.518152   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:45.518402   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:46.010062   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:46.010186   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:46.010186   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:46.010186   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:46.013631   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:46.013631   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:46.013631   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:46.013631   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:46.013631   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:46.013631   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:46.013631   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:46 GMT
	I0419 18:58:46.014264   14960 round_trippers.go:580]     Audit-Id: 7bb04b11-ebe6-4689-8cd3-c36985f92408
	I0419 18:58:46.014451   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:46.014883   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:46.014883   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:46.014883   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:46.014883   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:46.018492   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:46.018492   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:46.018492   14960 round_trippers.go:580]     Audit-Id: cb2779a4-ccb7-4985-b8c2-9edd7fd289ee
	I0419 18:58:46.018492   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:46.018492   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:46.018492   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:46.018492   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:46.018492   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:46 GMT
	I0419 18:58:46.019069   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:46.511584   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:46.511584   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:46.511584   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:46.511584   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:46.516442   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:46.516442   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:46.516442   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:46.516442   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:46.516442   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:46.516442   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:46 GMT
	I0419 18:58:46.516442   14960 round_trippers.go:580]     Audit-Id: 0b0946b4-2809-423c-9544-fa5f379590c4
	I0419 18:58:46.516442   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:46.516746   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:46.517523   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:46.517628   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:46.517628   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:46.517628   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:46.520686   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:46.520686   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:46.521376   14960 round_trippers.go:580]     Audit-Id: d977d06f-9214-44cb-83b8-1c2718ecec88
	I0419 18:58:46.521376   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:46.521376   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:46.521376   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:46.521376   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:46.521376   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:46 GMT
	I0419 18:58:46.521563   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:46.522336   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
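The repeated `GET .../pods/coredns-7db6d8ff4d-7w477` and `GET .../nodes/multinode-348000` cycles above come from minikube's `pod_ready` wait loop (`pod_ready.go:102`), which re-fetches the Pod roughly every 500 ms until its Ready condition flips to True or a deadline expires. A minimal sketch of that polling pattern (function and variable names here are illustrative, not minikube's actual code):

```python
import time

def wait_for_ready(check, timeout=10.0, interval=0.5,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or the timeout elapses.

    Hedged sketch of the wait loop visible in the log above: each
    iteration corresponds to one GET-pod / GET-node cycle, and the
    'pod ... has status "Ready":"False"' lines are the False branches.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True          # pod reported Ready
        sleep(interval)          # ~500 ms between polls in the log
    return False                 # timed out still NotReady

# Simulated readiness check that becomes True on the third poll.
calls = {"n": 0}
def fake_pod_ready():
    calls["n"] += 1
    return calls["n"] >= 3

ready = wait_for_ready(fake_pod_ready, timeout=5.0, interval=0.0)
```

In the real helper the check is an API-server round trip (the `round_trippers` lines above), and a `102`-tagged log line is emitted on each not-ready iteration.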
	I0419 18:58:47.013684   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:47.013684   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:47.013684   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:47.013684   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:47.018307   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:47.018614   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:47.018614   14960 round_trippers.go:580]     Audit-Id: b17a08a2-deac-4e8a-80ca-3e0169e742b5
	I0419 18:58:47.018614   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:47.018614   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:47.018614   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:47.018614   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:47.018614   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:47 GMT
	I0419 18:58:47.018838   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:47.020013   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:47.020013   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:47.020013   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:47.020013   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:47.023303   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:47.023843   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:47.023843   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:47 GMT
	I0419 18:58:47.023843   14960 round_trippers.go:580]     Audit-Id: 86b68404-8558-403b-89b8-468e97477cbc
	I0419 18:58:47.023843   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:47.023843   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:47.023843   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:47.023843   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:47.024144   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:47.512418   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:47.512418   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:47.512514   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:47.512514   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:47.515892   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:47.516502   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:47.516502   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:47.516502   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:47.516502   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:47 GMT
	I0419 18:58:47.516502   14960 round_trippers.go:580]     Audit-Id: a3394d08-f24c-4e61-ab6d-0f3bd3e5b9ac
	I0419 18:58:47.516502   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:47.516502   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:47.516901   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:47.517624   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:47.517791   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:47.517791   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:47.517791   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:47.525683   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 18:58:47.525683   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:47.525683   14960 round_trippers.go:580]     Audit-Id: 8598d40a-8430-4bf7-afe4-93f678b5c758
	I0419 18:58:47.525683   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:47.525683   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:47.525683   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:47.525683   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:47.525683   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:47 GMT
	I0419 18:58:47.525683   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:48.011314   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:48.011314   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:48.011395   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:48.011395   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:48.014911   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:48.014911   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:48.014911   14960 round_trippers.go:580]     Audit-Id: 1f5b3f54-4f6d-4d7a-941a-1bdba1686f07
	I0419 18:58:48.014911   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:48.015771   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:48.015771   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:48.015771   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:48.015771   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:48 GMT
	I0419 18:58:48.016059   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:48.016825   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:48.016825   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:48.016825   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:48.016825   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:48.019045   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:48.019045   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:48.019045   14960 round_trippers.go:580]     Audit-Id: 6019d7ca-c58e-4927-8795-94668e15ef17
	I0419 18:58:48.020060   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:48.020060   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:48.020060   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:48.020060   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:48.020060   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:48 GMT
	I0419 18:58:48.020434   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:48.513462   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:48.513462   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:48.513462   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:48.513462   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:48.520292   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:48.520850   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:48.520850   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:48.520850   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:48.520850   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:48 GMT
	I0419 18:58:48.520945   14960 round_trippers.go:580]     Audit-Id: 3b151ef3-62e5-4321-9357-841370841fd0
	I0419 18:58:48.520978   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:48.520978   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:48.520978   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:48.521802   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:48.521802   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:48.521802   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:48.521802   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:48.524516   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:48.524516   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:48.524516   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:48.524516   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:48 GMT
	I0419 18:58:48.524516   14960 round_trippers.go:580]     Audit-Id: 958b8c88-76b8-4622-b0e8-989840ad5c5c
	I0419 18:58:48.524516   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:48.524516   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:48.524516   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:48.526019   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:48.526564   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:58:49.010936   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:49.010936   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:49.010936   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:49.010936   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:49.014527   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:49.015578   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:49.015578   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:49.015578   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:49.015578   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:49 GMT
	I0419 18:58:49.015578   14960 round_trippers.go:580]     Audit-Id: c45c5942-1a77-4be5-b9ba-94f619bcde8f
	I0419 18:58:49.015578   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:49.015578   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:49.015578   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:49.016767   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:49.016832   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:49.016832   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:49.016832   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:49.020651   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:49.020651   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:49.020651   14960 round_trippers.go:580]     Audit-Id: aff4742f-c407-425a-b1bc-0d1a2f93d69a
	I0419 18:58:49.020651   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:49.020651   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:49.020651   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:49.020841   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:49.020841   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:49 GMT
	I0419 18:58:49.020992   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:49.509753   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:49.509936   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:49.509936   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:49.509936   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:49.513498   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:49.514526   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:49.514570   14960 round_trippers.go:580]     Audit-Id: 0758c54c-524b-4e9f-8a09-9e995f3075fc
	I0419 18:58:49.514681   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:49.514681   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:49.514681   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:49.514681   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:49.514681   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:49 GMT
	I0419 18:58:49.514928   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:49.515749   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:49.515749   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:49.515749   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:49.515835   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:49.518499   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:49.518499   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:49.518499   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:49 GMT
	I0419 18:58:49.518688   14960 round_trippers.go:580]     Audit-Id: a7e0c727-aee0-40fc-a1ae-9030dee06eda
	I0419 18:58:49.518688   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:49.518688   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:49.518688   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:49.518688   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:49.519180   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:50.009571   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:50.009571   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:50.009571   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:50.009571   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:50.014182   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:50.014182   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:50.014182   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:50.014182   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:50.014308   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:50.014308   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:50 GMT
	I0419 18:58:50.014308   14960 round_trippers.go:580]     Audit-Id: 0fd45140-7851-441f-ad86-173b46e5e47e
	I0419 18:58:50.014308   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:50.014375   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:50.015446   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:50.015446   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:50.015446   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:50.015542   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:50.017880   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:50.018880   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:50.018880   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:50.018880   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:50.018880   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:50.018880   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:50 GMT
	I0419 18:58:50.018880   14960 round_trippers.go:580]     Audit-Id: 43dbd2b3-4e19-4ec0-b0ea-7ee0ba70a166
	I0419 18:58:50.018880   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:50.018880   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:50.506222   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:50.506493   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:50.506493   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:50.506493   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:50.509930   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:50.509930   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:50.509930   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:50.509930   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:50 GMT
	I0419 18:58:50.509930   14960 round_trippers.go:580]     Audit-Id: fef51eaa-9269-49ed-a54a-e069f1402030
	I0419 18:58:50.509930   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:50.510919   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:50.510919   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:50.511110   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:50.511966   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:50.512035   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:50.512035   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:50.512035   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:50.514699   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:50.514699   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:50.514699   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:50 GMT
	I0419 18:58:50.515194   14960 round_trippers.go:580]     Audit-Id: dcdd70dc-a934-4a77-b83f-7520e2e9e133
	I0419 18:58:50.515194   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:50.515194   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:50.515194   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:50.515301   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:50.515515   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:51.004734   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:51.004734   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:51.004734   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:51.004734   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:51.008744   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:51.009549   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:51.009549   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:51.009549   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:51.009549   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:51.009549   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:51 GMT
	I0419 18:58:51.009549   14960 round_trippers.go:580]     Audit-Id: b617140e-68e1-47b8-b2b4-111f39118d39
	I0419 18:58:51.009640   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:51.009890   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:51.010995   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:51.011081   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:51.011081   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:51.011081   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:51.015033   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:51.015033   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:51.015033   14960 round_trippers.go:580]     Audit-Id: 7e7a4518-de21-40dc-8993-d243bb1dd849
	I0419 18:58:51.015033   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:51.015033   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:51.015223   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:51.015223   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:51.015223   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:51 GMT
	I0419 18:58:51.015223   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:51.016160   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:58:51.503559   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:51.503559   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:51.503559   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:51.503559   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:51.507167   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:51.507167   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:51.508146   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:51.508169   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:51.508169   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:51.508169   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:51 GMT
	I0419 18:58:51.508169   14960 round_trippers.go:580]     Audit-Id: 1beb3eff-e1fb-4c08-89da-b3aac0f1124a
	I0419 18:58:51.508169   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:51.508756   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:51.509545   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:51.509681   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:51.509681   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:51.509681   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:51.513021   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:51.513021   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:51.513021   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:51 GMT
	I0419 18:58:51.513021   14960 round_trippers.go:580]     Audit-Id: 3aa074e9-c0f6-40fc-ad77-6a5f48c89484
	I0419 18:58:51.513021   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:51.513021   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:51.513021   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:51.513021   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:51.513581   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:52.001462   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:52.001462   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:52.001462   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:52.001462   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:52.005136   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:52.005136   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:52.005136   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:52.005136   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:52.005136   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:52.005136   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:52.005136   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:52 GMT
	I0419 18:58:52.005136   14960 round_trippers.go:580]     Audit-Id: 97d85d44-715c-416f-810a-0faddabd4dfd
	I0419 18:58:52.005136   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:52.006791   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:52.006928   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:52.006928   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:52.007012   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:52.010365   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:52.010365   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:52.010365   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:52.010365   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:52.010365   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:52.010365   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:52 GMT
	I0419 18:58:52.010365   14960 round_trippers.go:580]     Audit-Id: 25ac4cba-a0de-4b5f-9a68-4919db795540
	I0419 18:58:52.010365   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:52.010365   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:52.499815   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:52.499868   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:52.499868   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:52.499868   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:52.503714   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:52.504370   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:52.504370   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:52.504370   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:52 GMT
	I0419 18:58:52.504370   14960 round_trippers.go:580]     Audit-Id: 357cd200-9af7-4b5d-97e9-224d193eae73
	I0419 18:58:52.504370   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:52.504370   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:52.504370   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:52.504624   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:52.505018   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:52.505018   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:52.505018   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:52.505018   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:52.510855   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:52.510855   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:52.510855   14960 round_trippers.go:580]     Audit-Id: 749b5414-bf8c-45d2-9622-49bec90f465e
	I0419 18:58:52.510855   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:52.510855   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:52.510855   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:52.510855   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:52.510855   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:52 GMT
	I0419 18:58:52.510855   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:53.002426   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:53.002471   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:53.002471   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:53.002471   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:53.006426   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:53.007129   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:53.007203   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:53 GMT
	I0419 18:58:53.007203   14960 round_trippers.go:580]     Audit-Id: 491108d3-e699-4762-b791-1915b7fcb83b
	I0419 18:58:53.007203   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:53.007203   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:53.007203   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:53.007203   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:53.007524   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:53.007746   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:53.007746   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:53.007746   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:53.007746   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:53.010479   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:53.010479   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:53.010479   14960 round_trippers.go:580]     Audit-Id: 2aa824ef-1ff1-4806-aeb2-492a07079c6e
	I0419 18:58:53.010479   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:53.010479   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:53.011506   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:53.011506   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:53.011506   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:53 GMT
	I0419 18:58:53.011667   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:53.500307   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:53.500307   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:53.500307   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:53.500406   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:53.504977   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:53.504977   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:53.504977   14960 round_trippers.go:580]     Audit-Id: e410c4e5-a8f0-46c8-8624-ce0c1ee8eb22
	I0419 18:58:53.504977   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:53.505065   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:53.505065   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:53.505065   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:53.505065   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:53 GMT
	I0419 18:58:53.505322   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:53.506213   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:53.506213   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:53.506279   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:53.506279   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:53.508662   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:53.508662   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:53.508662   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:53.508662   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:53 GMT
	I0419 18:58:53.508662   14960 round_trippers.go:580]     Audit-Id: 08d64dc2-74ca-4e2d-b9ef-cdb78bdd3955
	I0419 18:58:53.508662   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:53.508662   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:53.508662   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:53.513178   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:53.513178   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:58:53.999467   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:53.999467   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:53.999467   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:53.999467   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:54.004414   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:54.004414   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:54.004520   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:54.004520   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:54.004520   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:54.004520   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:54.004520   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:54 GMT
	I0419 18:58:54.004520   14960 round_trippers.go:580]     Audit-Id: e7f4d3c5-a75d-4481-8d83-997ff25b7c09
	I0419 18:58:54.005521   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:54.006662   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:54.006662   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:54.006662   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:54.006662   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:54.010065   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:54.010065   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:54.010065   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:54 GMT
	I0419 18:58:54.010337   14960 round_trippers.go:580]     Audit-Id: 7f9704e7-cd3f-4a03-8bf0-118a39946eba
	I0419 18:58:54.010337   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:54.010337   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:54.010337   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:54.010337   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:54.010701   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:54.513136   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:54.513367   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:54.513367   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:54.513367   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:54.518936   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:54.518936   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:54.518936   14960 round_trippers.go:580]     Audit-Id: b11c20dc-7692-4af5-b5e2-bb3be0ead9d6
	I0419 18:58:54.519494   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:54.519494   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:54.519494   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:54.519494   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:54.519494   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:54 GMT
	I0419 18:58:54.519705   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:54.520373   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:54.520492   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:54.520492   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:54.520492   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:54.523846   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:54.524197   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:54.524197   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:54.524197   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:54.524243   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:54.524243   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:54.524243   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:54 GMT
	I0419 18:58:54.524243   14960 round_trippers.go:580]     Audit-Id: 0bc51c75-d9cf-4afe-94a8-6b8abe378ab6
	I0419 18:58:54.524275   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:55.012471   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:55.012471   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:55.012471   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:55.012471   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:55.016049   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:55.016049   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:55.016049   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:55.016049   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:55 GMT
	I0419 18:58:55.016049   14960 round_trippers.go:580]     Audit-Id: 4d7a9db6-2056-402f-a1a5-137ce4c25d84
	I0419 18:58:55.016049   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:55.016630   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:55.016630   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:55.017587   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:55.018280   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:55.018280   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:55.018280   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:55.018280   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:55.020867   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:55.020867   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:55.020867   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:55.020867   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:55.020867   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:55.020867   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:55 GMT
	I0419 18:58:55.020867   14960 round_trippers.go:580]     Audit-Id: d1059101-eeb0-4cb2-b0f8-f0d0c7d9ef99
	I0419 18:58:55.020867   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:55.021669   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:55.501684   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:55.501791   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:55.501791   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:55.501791   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:55.505144   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:55.505557   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:55.505557   14960 round_trippers.go:580]     Audit-Id: 55c7be2a-6365-46ce-8f95-91a0a2e67773
	I0419 18:58:55.505557   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:55.505557   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:55.505557   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:55.505557   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:55.505557   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:55 GMT
	I0419 18:58:55.505967   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:55.506845   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:55.506845   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:55.506845   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:55.506845   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:55.511660   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:55.512186   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:55.512186   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:55.512186   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:55 GMT
	I0419 18:58:55.512186   14960 round_trippers.go:580]     Audit-Id: d0a750ff-7bdd-4061-af4d-4b88893a553f
	I0419 18:58:55.512186   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:55.512186   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:55.512186   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:55.512383   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:56.001930   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:56.002032   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:56.002032   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:56.002032   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:56.005975   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:56.006296   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:56.006397   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:56.006397   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:56 GMT
	I0419 18:58:56.006397   14960 round_trippers.go:580]     Audit-Id: d81c54bc-a6c1-48cc-ac44-3b9cdeea4d7f
	I0419 18:58:56.006397   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:56.006397   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:56.006397   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:56.006542   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:56.007566   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:56.007566   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:56.007651   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:56.007651   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:56.010789   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:56.010900   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:56.010939   14960 round_trippers.go:580]     Audit-Id: c8baf146-e2ca-4588-ae9c-09d2a23ce8f7
	I0419 18:58:56.010939   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:56.010939   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:56.010939   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:56.010986   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:56.010986   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:56 GMT
	I0419 18:58:56.011439   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:56.011439   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:58:56.500228   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:56.500506   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:56.500506   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:56.500506   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:56.505887   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:56.506799   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:56.506799   14960 round_trippers.go:580]     Audit-Id: 936953f8-729f-4bfd-9e01-b52403f31203
	I0419 18:58:56.506799   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:56.506799   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:56.506799   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:56.506875   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:56.506875   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:56 GMT
	I0419 18:58:56.507073   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:56.507886   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:56.507972   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:56.507972   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:56.507972   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:56.510970   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:56.511837   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:56.511837   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:56.511837   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:56.511837   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:56.511837   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:56.511837   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:56 GMT
	I0419 18:58:56.511837   14960 round_trippers.go:580]     Audit-Id: c76c2af0-fbdf-46da-96be-e3956002b641
	I0419 18:58:56.512168   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:57.012579   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:57.012579   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:57.012579   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:57.012579   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:57.016180   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:57.016862   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:57.016862   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:57.016862   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:57.016926   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:57.016926   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:57 GMT
	I0419 18:58:57.016926   14960 round_trippers.go:580]     Audit-Id: 50ef15e1-0670-4d21-9630-0ffec2d58ff7
	I0419 18:58:57.016926   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:57.017186   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:57.017506   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:57.017506   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:57.017506   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:57.017506   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:57.023081   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:57.023081   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:57.023081   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:57.023081   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:57 GMT
	I0419 18:58:57.023081   14960 round_trippers.go:580]     Audit-Id: 0aeb00b0-ba93-4b86-b408-2259ba7d36f9
	I0419 18:58:57.023081   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:57.023081   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:57.023081   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:57.023081   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:57.511101   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:57.511101   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:57.511242   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:57.511242   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:57.515145   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:57.515876   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:57.515876   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:57.515876   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:57.515876   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:57.515876   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:57 GMT
	I0419 18:58:57.515876   14960 round_trippers.go:580]     Audit-Id: a817d97f-3c7f-44dc-9116-54701b724a43
	I0419 18:58:57.515876   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:57.516041   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:57.516930   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:57.517010   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:57.517010   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:57.517010   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:57.520227   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:57.520227   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:57.520227   14960 round_trippers.go:580]     Audit-Id: 959dc126-1d59-416b-adee-c94f879a422b
	I0419 18:58:57.520527   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:57.520527   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:57.520527   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:57.520527   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:57.520527   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:57 GMT
	I0419 18:58:57.520637   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:58.012663   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:58.012663   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:58.012663   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:58.012663   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:58.018936   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:58.018936   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:58.018936   14960 round_trippers.go:580]     Audit-Id: 130eded6-9525-4de3-b78c-80914fe8c554
	I0419 18:58:58.019557   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:58.019557   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:58.019557   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:58.019557   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:58.019557   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:58 GMT
	I0419 18:58:58.019875   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:58.020687   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:58.020687   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:58.020687   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:58.020687   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:58.023623   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:58.023623   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:58.023623   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:58.023986   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:58.023986   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:58 GMT
	I0419 18:58:58.023986   14960 round_trippers.go:580]     Audit-Id: 8bff6465-a344-4dfc-89cc-23f13cbd1eab
	I0419 18:58:58.023986   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:58.023986   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:58.024360   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:58.024849   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:58:58.499426   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:58.499426   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:58.499521   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:58.499521   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:58.504473   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:58.504473   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:58.504473   14960 round_trippers.go:580]     Audit-Id: f62344a5-d5b7-4b71-833e-99f5c94e9df7
	I0419 18:58:58.504536   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:58.504536   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:58.504536   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:58.504536   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:58.504536   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:58 GMT
	I0419 18:58:58.504744   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:58.505825   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:58.505877   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:58.505877   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:58.505877   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:58.513362   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 18:58:58.513362   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:58.513362   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:58.513362   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:58.513362   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:58.513362   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:58.513362   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:58 GMT
	I0419 18:58:58.513362   14960 round_trippers.go:580]     Audit-Id: 5c7ff8af-1fc8-4c46-9775-bd89bb824d2c
	I0419 18:58:58.515125   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:59.014064   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:59.014064   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:59.014064   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:59.014064   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:59.018725   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:59.018725   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:59.018725   14960 round_trippers.go:580]     Audit-Id: 6db8b4c9-bce4-493b-98ce-3e79fb242698
	I0419 18:58:59.019579   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:59.019579   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:59.019579   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:59.019579   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:59.019579   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:59 GMT
	I0419 18:58:59.019831   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:59.020334   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:59.020334   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:59.020334   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:59.020334   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:59.023228   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:59.023228   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:59.023228   14960 round_trippers.go:580]     Audit-Id: 1176868a-c5a1-4b96-a026-1e089fa39aed
	I0419 18:58:59.023228   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:59.023228   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:59.023228   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:59.024229   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:59.024229   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:59 GMT
	I0419 18:58:59.024570   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:59.501412   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:59.501412   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:59.501412   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:59.501412   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:59.505091   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:59.505091   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:59.505520   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:59.505520   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:59.505520   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:59.505520   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:59 GMT
	I0419 18:58:59.505520   14960 round_trippers.go:580]     Audit-Id: e1bb8c4f-ef14-41ad-9636-e3c6440e65b9
	I0419 18:58:59.505520   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:59.505642   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:59.506293   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:59.506293   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:59.506293   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:59.506293   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:59.508922   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:59.508922   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:59.509918   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:59 GMT
	I0419 18:58:59.509918   14960 round_trippers.go:580]     Audit-Id: e4c4c918-1ac1-4da5-b401-a418aa104662
	I0419 18:58:59.509918   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:59.509961   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:59.509961   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:59.509961   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:59.510315   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:00.003096   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:00.003266   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:00.003266   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:00.003266   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:00.006954   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:00.007729   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:00.007729   14960 round_trippers.go:580]     Audit-Id: 74b62b74-b90c-4ac0-8dfe-0b106b05cf3e
	I0419 18:59:00.007729   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:00.007729   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:00.007729   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:00.007729   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:00.007729   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:00 GMT
	I0419 18:59:00.007791   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:00.008661   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:00.008759   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:00.008916   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:00.008916   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:00.012123   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:00.012123   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:00.012123   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:00 GMT
	I0419 18:59:00.012123   14960 round_trippers.go:580]     Audit-Id: deeb82b0-7ce0-406e-90dc-9d4d63109604
	I0419 18:59:00.012123   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:00.012123   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:00.012123   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:00.012123   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:00.012785   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:00.502120   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:00.502120   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:00.502120   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:00.502120   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:00.507096   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:00.507096   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:00.507096   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:00.507096   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:00.507096   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:00.507096   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:00 GMT
	I0419 18:59:00.507096   14960 round_trippers.go:580]     Audit-Id: 567592b3-545b-42a2-aab3-6b51f08293c5
	I0419 18:59:00.507096   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:00.507096   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:00.508181   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:00.508265   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:00.508265   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:00.508338   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:00.510672   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:00.511131   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:00.511131   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:00.511131   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:00 GMT
	I0419 18:59:00.511131   14960 round_trippers.go:580]     Audit-Id: 777797ae-d162-4ff7-9caf-e4b150a5facc
	I0419 18:59:00.511131   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:00.511131   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:00.511215   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:00.511522   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:00.511988   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:59:01.004612   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:01.004867   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:01.004867   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:01.004867   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:01.011467   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:59:01.011467   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:01.011467   14960 round_trippers.go:580]     Audit-Id: 7975e44c-afd5-460e-9a1e-ea016ce20729
	I0419 18:59:01.011467   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:01.011467   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:01.011467   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:01.011467   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:01.011467   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:01 GMT
	I0419 18:59:01.011467   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:01.012210   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:01.012210   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:01.012210   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:01.012210   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:01.015558   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:01.015558   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:01.015558   14960 round_trippers.go:580]     Audit-Id: 16f05b91-af11-4637-9245-77432f3b03e1
	I0419 18:59:01.015558   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:01.015558   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:01.015558   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:01.015558   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:01.015558   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:01 GMT
	I0419 18:59:01.016379   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:01.501671   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:01.501671   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:01.501671   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:01.501671   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:01.507260   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:59:01.507449   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:01.507449   14960 round_trippers.go:580]     Audit-Id: 413b0082-06d6-4dff-b059-cd2c76d48f3f
	I0419 18:59:01.507449   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:01.507449   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:01.507449   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:01.507449   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:01.507449   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:01 GMT
	I0419 18:59:01.507549   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:01.507549   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:01.507549   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:01.507549   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:01.507549   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:01.511560   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:01.511560   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:01.511560   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:01.511560   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:01 GMT
	I0419 18:59:01.511560   14960 round_trippers.go:580]     Audit-Id: db3a1cf5-3ab6-4d48-9e8f-24b32c8c05f8
	I0419 18:59:01.511805   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:01.511805   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:01.511805   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:01.512126   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:02.005151   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:02.005151   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:02.005229   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:02.005229   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:02.009102   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:02.009832   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:02.009832   14960 round_trippers.go:580]     Audit-Id: c98490b3-67f7-4399-9985-994bd877d913
	I0419 18:59:02.009832   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:02.009832   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:02.009832   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:02.009832   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:02.009832   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:02 GMT
	I0419 18:59:02.010055   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:02.010974   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:02.010974   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:02.010974   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:02.011096   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:02.016006   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:02.016006   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:02.016006   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:02.016006   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:02.016006   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:02 GMT
	I0419 18:59:02.016006   14960 round_trippers.go:580]     Audit-Id: 69fbbef0-1c12-411e-ad75-3d4dd969686b
	I0419 18:59:02.016006   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:02.016006   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:02.016717   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:02.501212   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:02.501468   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:02.501468   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:02.501468   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:02.506831   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:59:02.506831   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:02.506831   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:02 GMT
	I0419 18:59:02.506831   14960 round_trippers.go:580]     Audit-Id: 96e4394e-e07b-401a-b6e0-28622a2d3e86
	I0419 18:59:02.506831   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:02.506831   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:02.506831   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:02.506831   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:02.506831   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:02.508176   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:02.508176   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:02.508176   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:02.508176   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:02.510748   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:02.510748   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:02.511238   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:02.511238   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:02.511238   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:02 GMT
	I0419 18:59:02.511238   14960 round_trippers.go:580]     Audit-Id: 69dedb6c-2273-48cb-8532-f6ee18a4281b
	I0419 18:59:02.511238   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:02.511238   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:02.511312   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:02.512010   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:59:03.007472   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:03.007472   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:03.007472   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:03.007472   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:03.011089   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:03.011089   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:03.011089   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:03.011516   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:03.011516   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:03.011516   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:03.011516   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:03 GMT
	I0419 18:59:03.011516   14960 round_trippers.go:580]     Audit-Id: e99ebeca-6c53-4334-8236-77a3efe1afe6
	I0419 18:59:03.011575   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:03.012551   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:03.012626   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:03.012626   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:03.012626   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:03.015397   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:03.015397   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:03.016223   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:03.016223   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:03 GMT
	I0419 18:59:03.016223   14960 round_trippers.go:580]     Audit-Id: a8b578ac-9797-4389-a763-7d529c019a00
	I0419 18:59:03.016223   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:03.016223   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:03.016223   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:03.017085   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:03.510406   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:03.510503   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:03.510503   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:03.510503   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:03.513905   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:03.513905   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:03.513905   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:03 GMT
	I0419 18:59:03.513905   14960 round_trippers.go:580]     Audit-Id: b606eb40-7df5-4a63-8177-5657c6f57692
	I0419 18:59:03.514866   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:03.514866   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:03.514866   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:03.514866   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:03.515118   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:03.515783   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:03.515783   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:03.515783   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:03.515783   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:03.519128   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:03.519128   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:03.519220   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:03.519220   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:03.519220   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:03 GMT
	I0419 18:59:03.519220   14960 round_trippers.go:580]     Audit-Id: b769c591-b163-447c-90bc-1092ce12dddc
	I0419 18:59:03.519220   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:03.519220   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:03.519553   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:04.011815   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:04.011942   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:04.011942   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:04.011942   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:04.015874   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:04.016653   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:04.016653   14960 round_trippers.go:580]     Audit-Id: 4dab5c08-30f7-464c-842b-06d5e943f8a6
	I0419 18:59:04.016653   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:04.016653   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:04.016653   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:04.016787   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:04.016787   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:04 GMT
	I0419 18:59:04.016960   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:04.017746   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:04.017746   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:04.017851   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:04.017851   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:04.024090   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:59:04.024647   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:04.024647   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:04.024647   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:04 GMT
	I0419 18:59:04.024647   14960 round_trippers.go:580]     Audit-Id: 4e67dd41-3f9a-4588-8f66-5321d63c9bc8
	I0419 18:59:04.024697   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:04.024697   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:04.024697   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:04.024896   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:04.510605   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:04.510605   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:04.510605   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:04.510605   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:04.515215   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:04.515451   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:04.515451   14960 round_trippers.go:580]     Audit-Id: 0e87fa54-ba63-490e-8b3d-9f9734f6ff85
	I0419 18:59:04.515451   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:04.515451   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:04.515451   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:04.515451   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:04.515451   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:04 GMT
	I0419 18:59:04.515924   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:04.516701   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:04.516701   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:04.516701   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:04.516701   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:04.520034   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:04.520034   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:04.520358   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:04 GMT
	I0419 18:59:04.520358   14960 round_trippers.go:580]     Audit-Id: 83f30d46-0c5a-4fea-a2c6-276a2c6ab27b
	I0419 18:59:04.520358   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:04.520358   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:04.520358   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:04.520358   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:04.520819   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:04.521162   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:59:05.010123   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:05.010123   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:05.010123   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:05.010123   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:05.013667   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:05.013667   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:05.013667   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:05.013667   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:05.013667   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:05 GMT
	I0419 18:59:05.013667   14960 round_trippers.go:580]     Audit-Id: 3a9ef89a-2e33-4597-918b-23dede77582f
	I0419 18:59:05.013667   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:05.013667   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:05.014326   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:05.015088   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:05.015088   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:05.015088   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:05.015088   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:05.019417   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:05.019417   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:05.019417   14960 round_trippers.go:580]     Audit-Id: a484465a-6a6e-4370-be20-d59692ad3e71
	I0419 18:59:05.019417   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:05.019417   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:05.019417   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:05.019417   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:05.019417   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:05 GMT
	I0419 18:59:05.019417   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:05.498802   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:05.498802   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:05.498802   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:05.498802   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:05.503727   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:05.504233   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:05.504233   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:05.504278   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:05.504278   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:05 GMT
	I0419 18:59:05.504278   14960 round_trippers.go:580]     Audit-Id: 4ba016da-dbb8-4f20-910f-365004ca45f8
	I0419 18:59:05.504278   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:05.504278   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:05.504420   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:05.505404   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:05.505442   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:05.505442   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:05.505442   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:05.511078   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:59:05.511078   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:05.511078   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:05.511078   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:05 GMT
	I0419 18:59:05.511078   14960 round_trippers.go:580]     Audit-Id: a4332a83-aa62-4b96-aef0-b62a80262f9c
	I0419 18:59:05.511078   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:05.511078   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:05.511078   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:05.512058   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:06.007697   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:06.007697   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:06.007697   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:06.007697   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:06.011287   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:06.011416   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:06.011478   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:06.011478   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:06.011478   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:06.011478   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:06 GMT
	I0419 18:59:06.011478   14960 round_trippers.go:580]     Audit-Id: 2115d770-5519-4034-bc3a-b8952ec7043a
	I0419 18:59:06.011478   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:06.011767   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:06.012396   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:06.012396   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:06.012523   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:06.012523   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:06.014786   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:06.014786   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:06.014786   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:06.014786   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:06 GMT
	I0419 18:59:06.014786   14960 round_trippers.go:580]     Audit-Id: d588e780-f10e-4fe6-a4bd-fc9d41dd1d91
	I0419 18:59:06.014786   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:06.014786   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:06.014786   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:06.015662   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:06.513465   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:06.513465   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:06.513571   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:06.513571   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:06.517226   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:06.517226   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:06.517296   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:06 GMT
	I0419 18:59:06.517296   14960 round_trippers.go:580]     Audit-Id: f8b2f3c4-a973-4dea-8b42-717975851e34
	I0419 18:59:06.517296   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:06.517296   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:06.517296   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:06.517296   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:06.517611   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:06.518382   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:06.518471   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:06.518471   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:06.518471   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:06.520951   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:06.520951   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:06.520951   14960 round_trippers.go:580]     Audit-Id: 302db986-dfa0-4b81-9f04-fcba2af125c2
	I0419 18:59:06.520951   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:06.521259   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:06.521259   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:06.521259   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:06.521259   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:06 GMT
	I0419 18:59:06.521885   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:06.522496   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:59:07.014055   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:07.014140   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.014140   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.014140   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.020420   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:59:07.020420   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.020420   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.020882   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.020882   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.020882   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.020882   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.020882   14960 round_trippers.go:580]     Audit-Id: f0709ebb-18c8-4915-a343-02786ccbfac4
	I0419 18:59:07.021124   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:07.021942   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:07.021999   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.021999   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.022058   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.033923   14960 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0419 18:59:07.033983   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.033983   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.033983   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.033983   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.033983   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.033983   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.033983   14960 round_trippers.go:580]     Audit-Id: 18eabf05-5209-425c-b1c1-8b00846a50c2
	I0419 18:59:07.034559   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:07.502757   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:07.502961   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.502961   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.503039   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.508018   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:07.508018   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.508018   14960 round_trippers.go:580]     Audit-Id: f5901ff3-df7c-45fa-9dec-750a43541171
	I0419 18:59:07.508018   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.508018   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.508018   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.508018   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.508018   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.508018   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1944","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6786 chars]
	I0419 18:59:07.509050   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:07.509129   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.509129   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.509129   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.511296   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:07.512289   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.512327   14960 round_trippers.go:580]     Audit-Id: 696e3ece-e5f2-482b-b3fa-b066333e9c70
	I0419 18:59:07.512327   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.512327   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.512327   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.512327   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.512327   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.512598   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:07.513028   14960 pod_ready.go:92] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"True"
	I0419 18:59:07.513028   14960 pod_ready.go:81] duration metric: took 27.0146735s for pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:07.513028   14960 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:07.513142   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-348000
	I0419 18:59:07.513142   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.513225   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.513225   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.518561   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:59:07.518657   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.518657   14960 round_trippers.go:580]     Audit-Id: 420728a6-e4d0-4d9a-a9bc-15f5b1b59d30
	I0419 18:59:07.518657   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.518657   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.518657   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.518657   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.518739   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.519500   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-348000","namespace":"kube-system","uid":"33702588-cdf3-4577-b18d-18415cca2c25","resourceVersion":"1836","creationTimestamp":"2024-04-20T01:58:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.42.24:2379","kubernetes.io/config.hash":"c0cfa3da6a3913c3e67500f6c3e9d72b","kubernetes.io/config.mirror":"c0cfa3da6a3913c3e67500f6c3e9d72b","kubernetes.io/config.seen":"2024-04-20T01:57:55.099346749Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:58:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6149 chars]
	I0419 18:59:07.519550   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:07.519550   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.519550   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.519550   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.522314   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:07.522314   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.522314   14960 round_trippers.go:580]     Audit-Id: 668d1d46-9b89-4c7a-a9be-d01ff8dd8d6d
	I0419 18:59:07.523331   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.523331   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.523331   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.523331   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.523331   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.523331   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:07.523331   14960 pod_ready.go:92] pod "etcd-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 18:59:07.524113   14960 pod_ready.go:81] duration metric: took 10.303ms for pod "etcd-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:07.524146   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:07.524146   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-348000
	I0419 18:59:07.524146   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.524146   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.524146   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.526631   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:07.526631   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.526631   14960 round_trippers.go:580]     Audit-Id: 8583e14e-6dea-4103-800e-098537e0117a
	I0419 18:59:07.526631   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.526631   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.526631   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.526631   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.526631   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.527729   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-348000","namespace":"kube-system","uid":"13adbf1b-6c17-47a9-951d-2481680a47bd","resourceVersion":"1823","creationTimestamp":"2024-04-20T01:58:01Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.42.24:8443","kubernetes.io/config.hash":"af7a3c9321ace7e2a933260472b90113","kubernetes.io/config.mirror":"af7a3c9321ace7e2a933260472b90113","kubernetes.io/config.seen":"2024-04-20T01:57:55.026086199Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:58:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7685 chars]
	I0419 18:59:07.528175   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:07.528175   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.528175   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.528175   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.530806   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:07.530806   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.530806   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.530806   14960 round_trippers.go:580]     Audit-Id: bb69a5d2-e9e3-4b6c-969a-63c6433f4821
	I0419 18:59:07.530806   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.530806   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.530806   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.530806   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.530806   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:07.530806   14960 pod_ready.go:92] pod "kube-apiserver-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 18:59:07.530806   14960 pod_ready.go:81] duration metric: took 6.6602ms for pod "kube-apiserver-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:07.530806   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:07.532201   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-348000
	I0419 18:59:07.532201   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.532201   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.532332   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.535080   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:07.535080   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.535080   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.536048   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.536048   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.536048   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.536048   14960 round_trippers.go:580]     Audit-Id: 38701ef6-d4e6-4688-8eab-6aaad79aa8e5
	I0419 18:59:07.536048   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.536419   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-348000","namespace":"kube-system","uid":"299bb088-9795-4452-87a8-5e96bcacedde","resourceVersion":"1829","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"30aa2729d0c65b9f89e1ae2d151edd9b","kubernetes.io/config.mirror":"30aa2729d0c65b9f89e1ae2d151edd9b","kubernetes.io/config.seen":"2024-04-20T01:35:08.321898260Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0419 18:59:07.537180   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:07.537180   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.537231   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.537231   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.539482   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:07.539482   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.539482   14960 round_trippers.go:580]     Audit-Id: 179cb76c-c5c9-4176-a360-e036f1c8f798
	I0419 18:59:07.539482   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.539482   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.539482   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.539482   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.539482   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.539482   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:07.539482   14960 pod_ready.go:92] pod "kube-controller-manager-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 18:59:07.539482   14960 pod_ready.go:81] duration metric: took 7.2809ms for pod "kube-controller-manager-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:07.539482   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2jjsq" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:07.539482   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2jjsq
	I0419 18:59:07.539482   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.540493   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.540535   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.542270   14960 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:59:07.542270   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.542270   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.542270   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.542270   14960 round_trippers.go:580]     Audit-Id: 9c19064f-4110-482a-9b33-bdb23bb21ff0
	I0419 18:59:07.542270   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.542270   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.543246   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.544226   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2jjsq","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9666ab7-0d1f-4800-b979-6e38fecdc518","resourceVersion":"1708","creationTimestamp":"2024-04-20T01:42:52Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:42:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0419 18:59:07.544899   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m03
	I0419 18:59:07.544978   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.544978   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.545059   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.546925   14960 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:59:07.546925   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.546925   14960 round_trippers.go:580]     Audit-Id: 2f6646c5-bdcd-4060-b3dc-3f276a83411d
	I0419 18:59:07.546925   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.546925   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.546925   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.547947   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.547947   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.548092   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m03","uid":"08bfca2d-b382-4052-a5b6-0a78bee7caef","resourceVersion":"1871","creationTimestamp":"2024-04-20T01:53:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_53_29_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:53:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4398 chars]
	I0419 18:59:07.548536   14960 pod_ready.go:97] node "multinode-348000-m03" hosting pod "kube-proxy-2jjsq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000-m03" has status "Ready":"Unknown"
	I0419 18:59:07.548536   14960 pod_ready.go:81] duration metric: took 9.0538ms for pod "kube-proxy-2jjsq" in "kube-system" namespace to be "Ready" ...
	E0419 18:59:07.548536   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000-m03" hosting pod "kube-proxy-2jjsq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000-m03" has status "Ready":"Unknown"
	I0419 18:59:07.548536   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bjv9b" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:07.705114   14960 request.go:629] Waited for 156.4717ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bjv9b
	I0419 18:59:07.705326   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bjv9b
	I0419 18:59:07.705391   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.705391   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.705430   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.709801   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:07.709801   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.709801   14960 round_trippers.go:580]     Audit-Id: f15fc53e-6021-4d4f-ba7b-a7acaae73a3a
	I0419 18:59:07.709801   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.709801   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.710149   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.710149   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.710149   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.710832   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bjv9b","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e909d14-543a-4734-8c17-7e2b8188553d","resourceVersion":"1918","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0419 18:59:07.908329   14960 request.go:629] Waited for 196.2638ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:59:07.908646   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:59:07.908646   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.908646   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.908646   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.913701   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:07.913789   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.913789   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.913789   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.913789   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.913877   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.913877   14960 round_trippers.go:580]     Audit-Id: eec667c0-5f4b-4396-b538-1a02bb301448
	I0419 18:59:07.913877   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.913877   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"1930","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4582 chars]
	I0419 18:59:07.914762   14960 pod_ready.go:97] node "multinode-348000-m02" hosting pod "kube-proxy-bjv9b" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000-m02" has status "Ready":"Unknown"
	I0419 18:59:07.914762   14960 pod_ready.go:81] duration metric: took 366.1192ms for pod "kube-proxy-bjv9b" in "kube-system" namespace to be "Ready" ...
	E0419 18:59:07.914762   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000-m02" hosting pod "kube-proxy-bjv9b" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000-m02" has status "Ready":"Unknown"
	I0419 18:59:07.914762   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kj76x" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:08.113272   14960 request.go:629] Waited for 198.1954ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kj76x
	I0419 18:59:08.113485   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kj76x
	I0419 18:59:08.113485   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:08.113485   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:08.113485   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:08.118071   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:08.118071   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:08.118071   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:08.118071   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:08.118071   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:08 GMT
	I0419 18:59:08.118071   14960 round_trippers.go:580]     Audit-Id: d640072f-850f-4e7a-b610-f17bcf62a58d
	I0419 18:59:08.118071   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:08.118071   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:08.118762   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kj76x","generateName":"kube-proxy-","namespace":"kube-system","uid":"274342c4-c21f-4279-b0ea-743d8e2c1463","resourceVersion":"1750","creationTimestamp":"2024-04-20T01:35:22Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0419 18:59:08.303557   14960 request.go:629] Waited for 184.3049ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:08.303669   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:08.303723   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:08.303723   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:08.303723   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:08.307071   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:08.307071   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:08.307071   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:08 GMT
	I0419 18:59:08.307071   14960 round_trippers.go:580]     Audit-Id: c3a8878b-de3c-448e-80a5-8f98e8f88f18
	I0419 18:59:08.307071   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:08.307071   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:08.307071   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:08.307071   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:08.307071   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:08.307071   14960 pod_ready.go:92] pod "kube-proxy-kj76x" in "kube-system" namespace has status "Ready":"True"
	I0419 18:59:08.307071   14960 pod_ready.go:81] duration metric: took 392.3086ms for pod "kube-proxy-kj76x" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:08.307071   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:08.509840   14960 request.go:629] Waited for 202.6854ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348000
	I0419 18:59:08.509840   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348000
	I0419 18:59:08.509840   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:08.509840   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:08.509840   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:08.515634   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:59:08.515634   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:08.515634   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:08.515634   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:08 GMT
	I0419 18:59:08.515634   14960 round_trippers.go:580]     Audit-Id: 0cedc28b-6be5-4d75-a299-e4297f58ea50
	I0419 18:59:08.515634   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:08.515634   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:08.515891   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:08.516129   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-348000","namespace":"kube-system","uid":"000cfafe-a513-4738-9de2-3c25244b72be","resourceVersion":"1824","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"92813b2aed63b63058d3fd06709fa24e","kubernetes.io/config.mirror":"92813b2aed63b63058d3fd06709fa24e","kubernetes.io/config.seen":"2024-04-20T01:35:08.321899460Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0419 18:59:08.712798   14960 request.go:629] Waited for 195.3539ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:08.712798   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:08.712798   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:08.712798   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:08.712798   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:08.716425   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:08.717222   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:08.717222   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:08.717222   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:08.717222   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:08.717222   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:08 GMT
	I0419 18:59:08.717222   14960 round_trippers.go:580]     Audit-Id: 052f1365-af7d-4a4c-87bb-d2c6961f5fb4
	I0419 18:59:08.717222   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:08.717222   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:08.718327   14960 pod_ready.go:92] pod "kube-scheduler-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 18:59:08.718327   14960 pod_ready.go:81] duration metric: took 411.2544ms for pod "kube-scheduler-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:08.718327   14960 pod_ready.go:38] duration metric: took 28.2332658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 18:59:08.718327   14960 api_server.go:52] waiting for apiserver process to appear ...
	I0419 18:59:08.729751   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 18:59:08.754094   14960 command_runner.go:130] > bd3aa93bac25
	I0419 18:59:08.754215   14960 logs.go:276] 1 containers: [bd3aa93bac25]
	I0419 18:59:08.764137   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 18:59:08.785717   14960 command_runner.go:130] > 2deabe4dbdf4
	I0419 18:59:08.785790   14960 logs.go:276] 1 containers: [2deabe4dbdf4]
	I0419 18:59:08.796593   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 18:59:08.827474   14960 command_runner.go:130] > 352cf21a3e20
	I0419 18:59:08.828457   14960 command_runner.go:130] > 627b84abf45c
	I0419 18:59:08.828457   14960 logs.go:276] 2 containers: [352cf21a3e20 627b84abf45c]
	I0419 18:59:08.838185   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 18:59:08.862005   14960 command_runner.go:130] > d57aee391c14
	I0419 18:59:08.862005   14960 command_runner.go:130] > e476774b8f77
	I0419 18:59:08.863002   14960 logs.go:276] 2 containers: [d57aee391c14 e476774b8f77]
	I0419 18:59:08.872905   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 18:59:08.893884   14960 command_runner.go:130] > e438af0f1ec9
	I0419 18:59:08.893884   14960 command_runner.go:130] > a6586791413d
	I0419 18:59:08.894266   14960 logs.go:276] 2 containers: [e438af0f1ec9 a6586791413d]
	I0419 18:59:08.904440   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 18:59:08.931190   14960 command_runner.go:130] > b67f2295d26c
	I0419 18:59:08.932028   14960 command_runner.go:130] > 9638ddcd5428
	I0419 18:59:08.932028   14960 logs.go:276] 2 containers: [b67f2295d26c 9638ddcd5428]
	I0419 18:59:08.943113   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 18:59:08.966177   14960 command_runner.go:130] > ae0b21715f86
	I0419 18:59:08.966177   14960 command_runner.go:130] > f8c798c99407
	I0419 18:59:08.966877   14960 logs.go:276] 2 containers: [ae0b21715f86 f8c798c99407]
	I0419 18:59:08.966877   14960 logs.go:123] Gathering logs for dmesg ...
	I0419 18:59:08.966877   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 18:59:08.996433   14960 command_runner.go:130] > [Apr20 01:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0419 18:59:08.996848   14960 command_runner.go:130] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0419 18:59:08.996848   14960 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0419 18:59:08.996848   14960 command_runner.go:130] > [  +0.134823] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0419 18:59:08.996965   14960 command_runner.go:130] > [  +0.023006] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0419 18:59:08.996965   14960 command_runner.go:130] > [  +0.000006] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0419 18:59:08.996965   14960 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0419 18:59:08.996965   14960 command_runner.go:130] > [  +0.065433] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0419 18:59:08.997080   14960 command_runner.go:130] > [  +0.022829] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0419 18:59:08.997080   14960 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0419 18:59:08.997080   14960 command_runner.go:130] > [  +5.461945] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0419 18:59:08.997142   14960 command_runner.go:130] > [  +0.733998] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0419 18:59:08.997142   14960 command_runner.go:130] > [  +1.817887] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0419 18:59:08.997142   14960 command_runner.go:130] > [  +7.031305] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0419 18:59:08.997142   14960 command_runner.go:130] > [  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0419 18:59:08.997142   14960 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0419 18:59:08.997142   14960 command_runner.go:130] > [Apr20 01:57] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	I0419 18:59:08.997142   14960 command_runner.go:130] > [  +0.209815] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [ +26.622359] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +0.115734] kauditd_printk_skb: 73 callbacks suppressed
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +0.605928] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +0.209234] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +0.243987] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +2.954231] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +0.209781] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +0.225214] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +0.313735] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +0.929646] systemd-fstab-generator[1383]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +0.108494] kauditd_printk_skb: 205 callbacks suppressed
	I0419 18:59:08.997400   14960 command_runner.go:130] > [  +3.650728] systemd-fstab-generator[1520]: Ignoring "noauto" option for root device
	I0419 18:59:08.997400   14960 command_runner.go:130] > [  +1.371725] kauditd_printk_skb: 49 callbacks suppressed
	I0419 18:59:08.997400   14960 command_runner.go:130] > [Apr20 01:58] kauditd_printk_skb: 25 callbacks suppressed
	I0419 18:59:08.997400   14960 command_runner.go:130] > [  +3.878920] systemd-fstab-generator[2324]: Ignoring "noauto" option for root device
	I0419 18:59:08.997400   14960 command_runner.go:130] > [  +7.552702] kauditd_printk_skb: 70 callbacks suppressed
	I0419 18:59:08.999638   14960 logs.go:123] Gathering logs for coredns [352cf21a3e20] ...
	I0419 18:59:08.999704   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 352cf21a3e20"
	I0419 18:59:09.041411   14960 command_runner.go:130] > .:53
	I0419 18:59:09.041570   14960 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93714cfd58e203ac2baa48ea9c7b435951d2a9faed7a5c70b4e84c89c6c1fe4c1dfa41f14b3ebf0f5941dade673a82eaad960061e673dd78dcb856db3393b39d
	I0419 18:59:09.041570   14960 command_runner.go:130] > CoreDNS-1.11.1
	I0419 18:59:09.041570   14960 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0419 18:59:09.041570   14960 command_runner.go:130] > [INFO] 127.0.0.1:51206 - 14298 "HINFO IN 4972057462503628469.2167329557243878603. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028297062s
	I0419 18:59:09.042033   14960 logs.go:123] Gathering logs for kube-scheduler [d57aee391c14] ...
	I0419 18:59:09.042033   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57aee391c14"
	I0419 18:59:09.074641   14960 command_runner.go:130] ! I0420 01:57:58.020728       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:09.074740   14960 command_runner.go:130] ! I0420 01:58:00.771749       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0419 18:59:09.074927   14960 command_runner.go:130] ! I0420 01:58:00.771906       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.075024   14960 command_runner.go:130] ! I0420 01:58:00.785599       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0419 18:59:09.075154   14960 command_runner.go:130] ! I0420 01:58:00.785824       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0419 18:59:09.075154   14960 command_runner.go:130] ! I0420 01:58:00.785929       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 18:59:09.075154   14960 command_runner.go:130] ! I0420 01:58:00.785956       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:09.075154   14960 command_runner.go:130] ! I0420 01:58:00.785972       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0419 18:59:09.075154   14960 command_runner.go:130] ! I0420 01:58:00.786046       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0419 18:59:09.075154   14960 command_runner.go:130] ! I0420 01:58:00.786323       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0419 18:59:09.075154   14960 command_runner.go:130] ! I0420 01:58:00.786915       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:09.075154   14960 command_runner.go:130] ! I0420 01:58:00.887091       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0419 18:59:09.076050   14960 command_runner.go:130] ! I0420 01:58:00.887476       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:09.076050   14960 command_runner.go:130] ! I0420 01:58:00.888293       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0419 18:59:09.079596   14960 logs.go:123] Gathering logs for kube-proxy [e438af0f1ec9] ...
	I0419 18:59:09.079629   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e438af0f1ec9"
	I0419 18:59:09.105026   14960 command_runner.go:130] ! I0420 01:58:03.129201       1 server_linux.go:69] "Using iptables proxy"
	I0419 18:59:09.105026   14960 command_runner.go:130] ! I0420 01:58:03.201631       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.42.24"]
	I0419 18:59:09.105026   14960 command_runner.go:130] ! I0420 01:58:03.344058       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 18:59:09.105026   14960 command_runner.go:130] ! I0420 01:58:03.344107       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 18:59:09.105026   14960 command_runner.go:130] ! I0420 01:58:03.344137       1 server_linux.go:165] "Using iptables Proxier"
	I0419 18:59:09.105881   14960 command_runner.go:130] ! I0420 01:58:03.353394       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 18:59:09.105924   14960 command_runner.go:130] ! I0420 01:58:03.354462       1 server.go:872] "Version info" version="v1.30.0"
	I0419 18:59:09.105924   14960 command_runner.go:130] ! I0420 01:58:03.354693       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.105924   14960 command_runner.go:130] ! I0420 01:58:03.358325       1 config.go:192] "Starting service config controller"
	I0419 18:59:09.105924   14960 command_runner.go:130] ! I0420 01:58:03.358366       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 18:59:09.105924   14960 command_runner.go:130] ! I0420 01:58:03.358985       1 config.go:101] "Starting endpoint slice config controller"
	I0419 18:59:09.105992   14960 command_runner.go:130] ! I0420 01:58:03.359176       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 18:59:09.105992   14960 command_runner.go:130] ! I0420 01:58:03.358997       1 config.go:319] "Starting node config controller"
	I0419 18:59:09.106046   14960 command_runner.go:130] ! I0420 01:58:03.368409       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 18:59:09.106085   14960 command_runner.go:130] ! I0420 01:58:03.459372       1 shared_informer.go:320] Caches are synced for service config
	I0419 18:59:09.106085   14960 command_runner.go:130] ! I0420 01:58:03.459745       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 18:59:09.106085   14960 command_runner.go:130] ! I0420 01:58:03.470538       1 shared_informer.go:320] Caches are synced for node config
	I0419 18:59:09.108043   14960 logs.go:123] Gathering logs for kube-controller-manager [b67f2295d26c] ...
	I0419 18:59:09.108043   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67f2295d26c"
	I0419 18:59:09.137861   14960 command_runner.go:130] ! I0420 01:57:58.124915       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:09.138926   14960 command_runner.go:130] ! I0420 01:57:58.572589       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0419 18:59:09.139991   14960 command_runner.go:130] ! I0420 01:57:58.572759       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.140029   14960 command_runner.go:130] ! I0420 01:57:58.576545       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:09.140081   14960 command_runner.go:130] ! I0420 01:57:58.576765       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:09.140081   14960 command_runner.go:130] ! I0420 01:57:58.577138       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0419 18:59:09.140081   14960 command_runner.go:130] ! I0420 01:57:58.577308       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:09.140081   14960 command_runner.go:130] ! I0420 01:58:02.671844       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0419 18:59:09.140138   14960 command_runner.go:130] ! I0420 01:58:02.672396       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0419 18:59:09.140138   14960 command_runner.go:130] ! I0420 01:58:02.683222       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0419 18:59:09.140202   14960 command_runner.go:130] ! I0420 01:58:02.683502       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0419 18:59:09.140202   14960 command_runner.go:130] ! I0420 01:58:02.683748       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0419 18:59:09.140202   14960 command_runner.go:130] ! I0420 01:58:02.684992       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0419 18:59:09.140202   14960 command_runner.go:130] ! I0420 01:58:02.685159       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0419 18:59:09.140268   14960 command_runner.go:130] ! I0420 01:58:02.689572       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0419 18:59:09.140268   14960 command_runner.go:130] ! I0420 01:58:02.693653       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0419 18:59:09.140268   14960 command_runner.go:130] ! I0420 01:58:02.694118       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0419 18:59:09.140336   14960 command_runner.go:130] ! I0420 01:58:02.694295       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0419 18:59:09.140336   14960 command_runner.go:130] ! I0420 01:58:02.695565       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0419 18:59:09.140336   14960 command_runner.go:130] ! I0420 01:58:02.695757       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0419 18:59:09.140426   14960 command_runner.go:130] ! I0420 01:58:02.700089       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0419 18:59:09.140426   14960 command_runner.go:130] ! I0420 01:58:02.700328       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0419 18:59:09.140461   14960 command_runner.go:130] ! I0420 01:58:02.700370       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0419 18:59:09.140461   14960 command_runner.go:130] ! I0420 01:58:02.708704       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0419 18:59:09.140499   14960 command_runner.go:130] ! I0420 01:58:02.712057       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.712325       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.712551       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0419 18:59:09.140531   14960 command_runner.go:130] ! E0420 01:58:02.728628       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.728672       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! E0420 01:58:02.742147       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.742194       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.742206       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.748098       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.748399       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.748420       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.752218       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.752332       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.752344       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.765569       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.765610       1 shared_informer.go:313] Waiting for caches to sync for job
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.765645       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.772658       1 shared_informer.go:320] Caches are synced for tokens
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.773270       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.773287       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.786700       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.788042       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.799412       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.804126       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.804238       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.814226       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.818062       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.818127       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.868296       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.868361       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.868379       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.870217       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873404       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873440       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! W0420 01:58:02.873461       1 shared_informer.go:597] resyncPeriod 18h17m32.022460908s is smaller than resyncCheckPeriod 19h9m29.930546571s and the informer has already started. Changing it to 19h9m29.930546571s
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873587       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873612       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873690       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873722       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873768       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873784       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873852       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873883       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873963       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873989       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.874019       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.874045       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.874084       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.874104       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.874180       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.874255       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.874269       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.874289       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.910217       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.910746       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.912220       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.928174       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.928508       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.928473       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.929874       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.931641       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.931894       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.932890       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.934333       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.934546       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.934881       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.939106       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.939460       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.968845       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.968916       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.969733       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.969944       1 shared_informer.go:313] Waiting for caches to sync for node
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.975888       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.977148       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.977216       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.978712       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.979007       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.979040       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.982094       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.982639       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.982957       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.032307       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.032749       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.035306       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.036848       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.037653       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.038965       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.039366       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.039352       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.040679       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.040782       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.040908       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.041738       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.041781       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.042295       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.041839       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.042314       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.041850       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:09.143989   14960 command_runner.go:130] ! I0420 01:58:13.042715       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:09.143989   14960 command_runner.go:130] ! I0420 01:58:13.046953       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0419 18:59:09.143989   14960 command_runner.go:130] ! I0420 01:58:13.047617       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0419 18:59:09.143989   14960 command_runner.go:130] ! I0420 01:58:13.047660       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0419 18:59:09.144101   14960 command_runner.go:130] ! I0420 01:58:13.047670       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0419 18:59:09.144101   14960 command_runner.go:130] ! I0420 01:58:13.050144       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0419 18:59:09.144101   14960 command_runner.go:130] ! I0420 01:58:13.050286       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0419 18:59:09.144101   14960 command_runner.go:130] ! I0420 01:58:13.050982       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0419 18:59:09.144173   14960 command_runner.go:130] ! I0420 01:58:13.051033       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0419 18:59:09.144173   14960 command_runner.go:130] ! I0420 01:58:13.051061       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0419 18:59:09.144230   14960 command_runner.go:130] ! I0420 01:58:13.054294       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0419 18:59:09.144273   14960 command_runner.go:130] ! I0420 01:58:13.054709       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0419 18:59:09.144273   14960 command_runner.go:130] ! I0420 01:58:13.054987       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0419 18:59:09.144273   14960 command_runner.go:130] ! I0420 01:58:13.057961       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0419 18:59:09.144273   14960 command_runner.go:130] ! I0420 01:58:13.058399       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0419 18:59:09.144338   14960 command_runner.go:130] ! I0420 01:58:13.058606       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0419 18:59:09.144338   14960 command_runner.go:130] ! I0420 01:58:13.060766       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.061307       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.060852       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.061691       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.064061       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.064698       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.065134       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.067945       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.068315       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.068613       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.077312       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.077939       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.078050       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.078623       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.083275       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.083591       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.083702       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.090751       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.091149       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.091393       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.091591       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.096868       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.097085       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.100720       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.101287       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.101375       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.103459       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.106949       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.107026       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.116002       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.139685       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.148344       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.152489       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.140934       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.151083       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000\" does not exist"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.141105       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0419 18:59:09.144971   14960 command_runner.go:130] ! I0420 01:58:13.156086       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:09.144971   14960 command_runner.go:130] ! I0420 01:58:13.156676       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m02\" does not exist"
	I0419 18:59:09.145018   14960 command_runner.go:130] ! I0420 01:58:13.156750       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0419 18:59:09.145018   14960 command_runner.go:130] ! I0420 01:58:13.156865       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.145018   14960 command_runner.go:130] ! I0420 01:58:13.142425       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0419 18:59:09.145111   14960 command_runner.go:130] ! I0420 01:58:13.157020       1 shared_informer.go:320] Caches are synced for expand
	I0419 18:59:09.145111   14960 command_runner.go:130] ! I0420 01:58:13.159992       1 shared_informer.go:320] Caches are synced for ephemeral
	I0419 18:59:09.145144   14960 command_runner.go:130] ! I0420 01:58:13.145957       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:09.145191   14960 command_runner.go:130] ! I0420 01:58:13.162320       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.165325       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.165759       1 shared_informer.go:320] Caches are synced for job
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.169537       1 shared_informer.go:320] Caches are synced for service account
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.171293       1 shared_informer.go:320] Caches are synced for node
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.178178       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.178222       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.178230       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.178237       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.178270       1 shared_informer.go:320] Caches are synced for attach detach
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.179699       1 shared_informer.go:320] Caches are synced for PV protection
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.183856       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.183905       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.188521       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.195859       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.200417       1 shared_informer.go:320] Caches are synced for crt configmap
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.201881       1 shared_informer.go:320] Caches are synced for persistent volume
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.204647       1 shared_informer.go:320] Caches are synced for endpoint
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.207356       1 shared_informer.go:320] Caches are synced for PVC protection
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.213532       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.214173       1 shared_informer.go:320] Caches are synced for namespace
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.219105       1 shared_informer.go:320] Caches are synced for GC
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.228919       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.535929ms"
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.230155       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.901µs"
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.230170       1 shared_informer.go:320] Caches are synced for HPA
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.234086       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.236046       1 shared_informer.go:320] Caches are synced for TTL
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.240266       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.682408ms"
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.240992       1 shared_informer.go:320] Caches are synced for deployment
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.243741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="114.104µs"
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.248776       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.252859       1 shared_informer.go:320] Caches are synced for daemon sets
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.253008       1 shared_informer.go:320] Caches are synced for taint
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.259997       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.297486       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000"
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.297542       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m02"
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.297627       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m03"
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.297865       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0419 18:59:09.145801   14960 command_runner.go:130] ! I0420 01:58:13.335459       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0419 18:59:09.145801   14960 command_runner.go:130] ! I0420 01:58:13.374436       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:09.145801   14960 command_runner.go:130] ! I0420 01:58:13.389294       1 shared_informer.go:320] Caches are synced for cronjob
	I0419 18:59:09.145848   14960 command_runner.go:130] ! I0420 01:58:13.392315       1 shared_informer.go:320] Caches are synced for disruption
	I0419 18:59:09.145848   14960 command_runner.go:130] ! I0420 01:58:13.397172       1 shared_informer.go:320] Caches are synced for stateful set
	I0419 18:59:09.145848   14960 command_runner.go:130] ! I0420 01:58:13.416186       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:09.145848   14960 command_runner.go:130] ! I0420 01:58:13.857437       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:09.145848   14960 command_runner.go:130] ! I0420 01:58:13.878325       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:09.145848   14960 command_runner.go:130] ! I0420 01:58:13.878534       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0419 18:59:09.145848   14960 command_runner.go:130] ! I0420 01:58:40.290168       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.145971   14960 command_runner.go:130] ! I0420 01:58:53.395955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.694507ms"
	I0419 18:59:09.145971   14960 command_runner.go:130] ! I0420 01:58:53.396146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.003µs"
	I0419 18:59:09.146005   14960 command_runner.go:130] ! I0420 01:59:07.033370       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.713655ms"
	I0419 18:59:09.146005   14960 command_runner.go:130] ! I0420 01:59:07.033533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.092µs"
	I0419 18:59:09.146058   14960 command_runner.go:130] ! I0420 01:59:07.047220       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.391µs"
	I0419 18:59:09.146058   14960 command_runner.go:130] ! I0420 01:59:07.121391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.338984ms"
	I0419 18:59:09.146100   14960 command_runner.go:130] ! I0420 01:59:07.121503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.691µs"
	I0419 18:59:09.162742   14960 logs.go:123] Gathering logs for container status ...
	I0419 18:59:09.162742   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 18:59:09.238662   14960 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0419 18:59:09.238662   14960 command_runner.go:130] > d608b74b0597f       8c811b4aec35f                                                                                         4 seconds ago        Running             busybox                   1                   75ff9f4e9dde2       busybox-fc5497c4f-xnz2k
	I0419 18:59:09.238662   14960 command_runner.go:130] > 352cf21a3e202       cbb01a7bd410d                                                                                         4 seconds ago        Running             coredns                   1                   f28a1e746a9b4       coredns-7db6d8ff4d-7w477
	I0419 18:59:09.238662   14960 command_runner.go:130] > c6f350bee7762       6e38f40d628db                                                                                         24 seconds ago       Running             storage-provisioner       2                   5472c1fba3929       storage-provisioner
	I0419 18:59:09.238662   14960 command_runner.go:130] > ae0b21715f861       4950bb10b3f87                                                                                         33 seconds ago       Running             kindnet-cni               2                   b5a777eba295e       kindnet-s4fsr
	I0419 18:59:09.238662   14960 command_runner.go:130] > f8c798c994078       4950bb10b3f87                                                                                         About a minute ago   Exited              kindnet-cni               1                   b5a777eba295e       kindnet-s4fsr
	I0419 18:59:09.238662   14960 command_runner.go:130] > 45383c4290ad1       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   5472c1fba3929       storage-provisioner
	I0419 18:59:09.238662   14960 command_runner.go:130] > e438af0f1ec9e       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   09f65a6953038       kube-proxy-kj76x
	I0419 18:59:09.238662   14960 command_runner.go:130] > 2deabe4dbdf41       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   ab9ff1d906880       etcd-multinode-348000
	I0419 18:59:09.238662   14960 command_runner.go:130] > bd3aa93bac25b       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   d7052a6f04def       kube-apiserver-multinode-348000
	I0419 18:59:09.238662   14960 command_runner.go:130] > b67f2295d26ca       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   118cca57d1f54       kube-controller-manager-multinode-348000
	I0419 18:59:09.239687   14960 command_runner.go:130] > d57aee391c146       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   e8baa597c1467       kube-scheduler-multinode-348000
	I0419 18:59:09.239687   14960 command_runner.go:130] > d8afb3e1fb946       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   476e3efb38684       busybox-fc5497c4f-xnz2k
	I0419 18:59:09.239740   14960 command_runner.go:130] > 627b84abf45cd       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   2dd294415aae1       coredns-7db6d8ff4d-7w477
	I0419 18:59:09.239740   14960 command_runner.go:130] > a6586791413d0       a0bf559e280cf                                                                                         23 minutes ago       Exited              kube-proxy                0                   7935893e9f22a       kube-proxy-kj76x
	I0419 18:59:09.239807   14960 command_runner.go:130] > 9638ddcd54285       c7aad43836fa5                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   6e420625b84be       kube-controller-manager-multinode-348000
	I0419 18:59:09.239871   14960 command_runner.go:130] > e476774b8f77e       259c8277fcbbc                                                                                         24 minutes ago       Exited              kube-scheduler            0                   e5d733991bf1a       kube-scheduler-multinode-348000
	I0419 18:59:09.242765   14960 logs.go:123] Gathering logs for coredns [627b84abf45c] ...
	I0419 18:59:09.242815   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627b84abf45c"
	I0419 18:59:09.283621   14960 command_runner.go:130] > .:53
	I0419 18:59:09.283621   14960 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93714cfd58e203ac2baa48ea9c7b435951d2a9faed7a5c70b4e84c89c6c1fe4c1dfa41f14b3ebf0f5941dade673a82eaad960061e673dd78dcb856db3393b39d
	I0419 18:59:09.283621   14960 command_runner.go:130] > CoreDNS-1.11.1
	I0419 18:59:09.283621   14960 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0419 18:59:09.283800   14960 command_runner.go:130] > [INFO] 127.0.0.1:37904 - 37003 "HINFO IN 1336380353163369387.5260466772500757990. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.053891439s
	I0419 18:59:09.283800   14960 command_runner.go:130] > [INFO] 10.244.1.2:47846 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002913s
	I0419 18:59:09.283800   14960 command_runner.go:130] > [INFO] 10.244.1.2:60728 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.118385602s
	I0419 18:59:09.283800   14960 command_runner.go:130] > [INFO] 10.244.1.2:48827 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.043741711s
	I0419 18:59:09.283800   14960 command_runner.go:130] > [INFO] 10.244.1.2:57126 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.111854404s
	I0419 18:59:09.283893   14960 command_runner.go:130] > [INFO] 10.244.0.3:44468 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001971s
	I0419 18:59:09.283893   14960 command_runner.go:130] > [INFO] 10.244.0.3:58477 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.002287005s
	I0419 18:59:09.283893   14960 command_runner.go:130] > [INFO] 10.244.0.3:39825 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000198301s
	I0419 18:59:09.283893   14960 command_runner.go:130] > [INFO] 10.244.0.3:54956 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000604s
	I0419 18:59:09.283893   14960 command_runner.go:130] > [INFO] 10.244.1.2:48593 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001261s
	I0419 18:59:09.283893   14960 command_runner.go:130] > [INFO] 10.244.1.2:58743 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.027871268s
	I0419 18:59:09.283979   14960 command_runner.go:130] > [INFO] 10.244.1.2:44517 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002274s
	I0419 18:59:09.283979   14960 command_runner.go:130] > [INFO] 10.244.1.2:35998 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000219501s
	I0419 18:59:09.283979   14960 command_runner.go:130] > [INFO] 10.244.1.2:58770 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012982932s
	I0419 18:59:09.283979   14960 command_runner.go:130] > [INFO] 10.244.1.2:55456 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174201s
	I0419 18:59:09.284062   14960 command_runner.go:130] > [INFO] 10.244.1.2:59031 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001304s
	I0419 18:59:09.284062   14960 command_runner.go:130] > [INFO] 10.244.1.2:41687 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000198401s
	I0419 18:59:09.284062   14960 command_runner.go:130] > [INFO] 10.244.0.3:46929 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003044s
	I0419 18:59:09.284062   14960 command_runner.go:130] > [INFO] 10.244.0.3:35877 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000325701s
	I0419 18:59:09.284138   14960 command_runner.go:130] > [INFO] 10.244.0.3:53705 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000318601s
	I0419 18:59:09.284138   14960 command_runner.go:130] > [INFO] 10.244.0.3:40560 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164401s
	I0419 18:59:09.284138   14960 command_runner.go:130] > [INFO] 10.244.0.3:53239 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001239s
	I0419 18:59:09.284138   14960 command_runner.go:130] > [INFO] 10.244.0.3:39754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001464s
	I0419 18:59:09.284138   14960 command_runner.go:130] > [INFO] 10.244.0.3:41397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001668s
	I0419 18:59:09.284255   14960 command_runner.go:130] > [INFO] 10.244.0.3:49126 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001646s
	I0419 18:59:09.284255   14960 command_runner.go:130] > [INFO] 10.244.1.2:37850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115501s
	I0419 18:59:09.284255   14960 command_runner.go:130] > [INFO] 10.244.1.2:44063 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001443s
	I0419 18:59:09.284255   14960 command_runner.go:130] > [INFO] 10.244.1.2:39924 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000607s
	I0419 18:59:09.284255   14960 command_runner.go:130] > [INFO] 10.244.1.2:53244 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000622s
	I0419 18:59:09.284331   14960 command_runner.go:130] > [INFO] 10.244.0.3:52017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001879s
	I0419 18:59:09.284331   14960 command_runner.go:130] > [INFO] 10.244.0.3:55488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000814s
	I0419 18:59:09.284331   14960 command_runner.go:130] > [INFO] 10.244.0.3:57536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000778s
	I0419 18:59:09.284405   14960 command_runner.go:130] > [INFO] 10.244.0.3:45454 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001788s
	I0419 18:59:09.284405   14960 command_runner.go:130] > [INFO] 10.244.1.2:52247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001095s
	I0419 18:59:09.284405   14960 command_runner.go:130] > [INFO] 10.244.1.2:46954 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001143s
	I0419 18:59:09.284405   14960 command_runner.go:130] > [INFO] 10.244.1.2:47574 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098701s
	I0419 18:59:09.284477   14960 command_runner.go:130] > [INFO] 10.244.1.2:36658 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000170301s
	I0419 18:59:09.284477   14960 command_runner.go:130] > [INFO] 10.244.0.3:35421 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001002s
	I0419 18:59:09.284477   14960 command_runner.go:130] > [INFO] 10.244.0.3:41995 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132201s
	I0419 18:59:09.284477   14960 command_runner.go:130] > [INFO] 10.244.0.3:36431 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001956s
	I0419 18:59:09.284477   14960 command_runner.go:130] > [INFO] 10.244.0.3:38168 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000222s
	I0419 18:59:09.284549   14960 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0419 18:59:09.284549   14960 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0419 18:59:09.287225   14960 logs.go:123] Gathering logs for kube-proxy [a6586791413d] ...
	I0419 18:59:09.287225   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6586791413d"
	I0419 18:59:09.315630   14960 command_runner.go:130] ! I0420 01:35:26.120497       1 server_linux.go:69] "Using iptables proxy"
	I0419 18:59:09.315630   14960 command_runner.go:130] ! I0420 01:35:26.156956       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.42.231"]
	I0419 18:59:09.315630   14960 command_runner.go:130] ! I0420 01:35:26.208282       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 18:59:09.316431   14960 command_runner.go:130] ! I0420 01:35:26.208472       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 18:59:09.316431   14960 command_runner.go:130] ! I0420 01:35:26.208501       1 server_linux.go:165] "Using iptables Proxier"
	I0419 18:59:09.316461   14960 command_runner.go:130] ! I0420 01:35:26.214693       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 18:59:09.316461   14960 command_runner.go:130] ! I0420 01:35:26.216114       1 server.go:872] "Version info" version="v1.30.0"
	I0419 18:59:09.316461   14960 command_runner.go:130] ! I0420 01:35:26.216181       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.316461   14960 command_runner.go:130] ! I0420 01:35:26.219192       1 config.go:192] "Starting service config controller"
	I0419 18:59:09.316461   14960 command_runner.go:130] ! I0420 01:35:26.219810       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 18:59:09.316461   14960 command_runner.go:130] ! I0420 01:35:26.220079       1 config.go:101] "Starting endpoint slice config controller"
	I0419 18:59:09.316461   14960 command_runner.go:130] ! I0420 01:35:26.220093       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 18:59:09.316582   14960 command_runner.go:130] ! I0420 01:35:26.221802       1 config.go:319] "Starting node config controller"
	I0419 18:59:09.316582   14960 command_runner.go:130] ! I0420 01:35:26.221980       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 18:59:09.316644   14960 command_runner.go:130] ! I0420 01:35:26.320313       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 18:59:09.316644   14960 command_runner.go:130] ! I0420 01:35:26.320380       1 shared_informer.go:320] Caches are synced for service config
	I0419 18:59:09.316644   14960 command_runner.go:130] ! I0420 01:35:26.322323       1 shared_informer.go:320] Caches are synced for node config
	I0419 18:59:09.319064   14960 logs.go:123] Gathering logs for kubelet ...
	I0419 18:59:09.319150   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 18:59:09.351267   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0419 18:59:09.351340   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: I0420 01:57:51.575772    1390 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0419 18:59:09.351340   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: I0420 01:57:51.576306    1390 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.351395   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: I0420 01:57:51.577194    1390 server.go:927] "Client rotation is on, will bootstrap in background"
	I0419 18:59:09.351430   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: E0420 01:57:51.579651    1390 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0419 18:59:09.351474   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:09.351515   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0419 18:59:09.351515   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0419 18:59:09.351563   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0419 18:59:09.351563   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0419 18:59:09.351563   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: I0420 01:57:52.300689    1443 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0419 18:59:09.351602   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: I0420 01:57:52.301056    1443 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.351649   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: I0420 01:57:52.301551    1443 server.go:927] "Client rotation is on, will bootstrap in background"
	I0419 18:59:09.351689   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: E0420 01:57:52.301845    1443 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0419 18:59:09.351689   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:09.351740   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0419 18:59:09.351740   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0419 18:59:09.351798   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0419 18:59:09.351798   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.955182    1526 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0419 18:59:09.351839   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.955367    1526 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.351839   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.955676    1526 server.go:927] "Client rotation is on, will bootstrap in background"
	I0419 18:59:09.351884   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.957661    1526 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0419 18:59:09.351884   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.971626    1526 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:09.351939   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.998144    1526 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0419 18:59:09.351976   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.998312    1526 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0419 18:59:09.351976   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.999775    1526 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0419 18:59:09.352115   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:54.999948    1526 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-348000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0419 18:59:09.352150   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.000770    1526 topology_manager.go:138] "Creating topology manager with none policy"
	I0419 18:59:09.352150   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.000879    1526 container_manager_linux.go:301] "Creating device plugin manager"
	I0419 18:59:09.352197   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.001855    1526 state_mem.go:36] "Initialized new in-memory state store"
	I0419 18:59:09.352197   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.003861    1526 kubelet.go:400] "Attempting to sync node with API server"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.003952    1526 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.004045    1526 kubelet.go:312] "Adding apiserver pod source"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.009472    1526 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.017989    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.018091    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.019381    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.019428    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.019619    1526 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.1" apiVersion="v1"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.022328    1526 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.023051    1526 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.025680    1526 server.go:1264] "Started kubelet"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.028955    1526 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.031361    1526 server.go:455] "Adding debug handlers to kubelet server"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.034499    1526 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.035670    1526 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.036524    1526 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.19.42.24:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-348000.17c7da5cb9bb1787  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-348000,UID:multinode-348000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-348000,},FirstTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,LastTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-348000,}"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.053292    1526 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.062175    1526 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.067879    1526 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.097159    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="200ms"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.116285    1526 factory.go:221] Registration of the systemd container factory successfully
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.117073    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.118285    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.352809   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.117970    1526 reconciler.go:26] "Reconciler: start to sync state"
	I0419 18:59:09.352809   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.118962    1526 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0419 18:59:09.352856   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.119576    1526 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0419 18:59:09.352856   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.135081    1526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0419 18:59:09.352912   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.165861    1526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0419 18:59:09.352944   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166700    1526 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166759    1526 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166846    1526 state_mem.go:36] "Initialized new in-memory state store"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166997    1526 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168395    1526 kubelet.go:2337] "Starting kubelet main sync loop"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.168500    1526 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168338    1526 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168585    1526 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168613    1526 policy_none.go:49] "None policy: Start"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.167637    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.171087    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.172453    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.172557    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.187830    1526 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.187946    1526 state_mem.go:35] "Initializing new in-memory state store"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.189368    1526 state_mem.go:75] "Updated machine memory state"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.195268    1526 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.195483    1526 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.197626    1526 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.198638    1526 iptables.go:577] "Could not set up iptables canary" err=<
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.201551    1526 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-348000\" not found"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.269451    1526 topology_manager.go:215] "Topology Admit Handler" podUID="30aa2729d0c65b9f89e1ae2d151edd9b" podNamespace="kube-system" podName="kube-controller-manager-multinode-348000"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.271913    1526 topology_manager.go:215] "Topology Admit Handler" podUID="92813b2aed63b63058d3fd06709fa24e" podNamespace="kube-system" podName="kube-scheduler-multinode-348000"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.273779    1526 topology_manager.go:215] "Topology Admit Handler" podUID="af7a3c9321ace7e2a933260472b90113" podNamespace="kube-system" podName="kube-apiserver-multinode-348000"
	I0419 18:59:09.353645   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.275662    1526 topology_manager.go:215] "Topology Admit Handler" podUID="c0cfa3da6a3913c3e67500f6c3e9d72b" podNamespace="kube-system" podName="etcd-multinode-348000"
	I0419 18:59:09.353645   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.281258    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="476e3efb38684054cbc21c027cf1ddd3f9ca47bb829786f8636fd877fd4b2f81"
	I0419 18:59:09.353645   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.281433    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dd294415aae178d6b9bed0368d49bedc6d0afa8f5b9ad0011c73ffcb2c24b3c"
	I0419 18:59:09.353645   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.281454    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5d733991bf1a9e82ffd10768e0652c6c3f983ab24307142345cab3358f068bc"
	I0419 18:59:09.353645   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.297657    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd9e5fae3950c99e6cc71d6166919d407b00212c93827d74e5b83f3896925c0a"
	I0419 18:59:09.353843   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.310354    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="400ms"
	I0419 18:59:09.353843   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.316552    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="187cb57784f4ebcba88e5bf725c118a7d2beec4f543d3864e8f389573f0b11f9"
	I0419 18:59:09.353843   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.332421    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e420625b84be10aa87409a43f4296165b33ed76e82c3ba8a9214abd7177bd38"
	I0419 18:59:09.354005   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.356050    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00d48e11227effb5f0316d58c24e374b4b3f9dcd1b98ac51d6b0038a72d47e42"
	I0419 18:59:09.354005   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.372330    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:09.354005   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.373779    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:09.354088   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.376042    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da1d06ec238f43c7ad43cae75e142a6d15b9c8fb69f88ad8079f167f3f3a6fd4"
	I0419 18:59:09.354088   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.392858    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7935893e9f22a54393d2b3d0a644f7c11a848d5604938074232342a8602e239f"
	I0419 18:59:09.354088   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423082    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-ca-certs\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:09.354173   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423312    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-flexvolume-dir\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423400    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-k8s-certs\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423427    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-kubeconfig\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423456    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af7a3c9321ace7e2a933260472b90113-ca-certs\") pod \"kube-apiserver-multinode-348000\" (UID: \"af7a3c9321ace7e2a933260472b90113\") " pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423489    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/c0cfa3da6a3913c3e67500f6c3e9d72b-etcd-data\") pod \"etcd-multinode-348000\" (UID: \"c0cfa3da6a3913c3e67500f6c3e9d72b\") " pod="kube-system/etcd-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423525    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423552    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92813b2aed63b63058d3fd06709fa24e-kubeconfig\") pod \"kube-scheduler-multinode-348000\" (UID: \"92813b2aed63b63058d3fd06709fa24e\") " pod="kube-system/kube-scheduler-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423669    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af7a3c9321ace7e2a933260472b90113-k8s-certs\") pod \"kube-apiserver-multinode-348000\" (UID: \"af7a3c9321ace7e2a933260472b90113\") " pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423703    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af7a3c9321ace7e2a933260472b90113-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-348000\" (UID: \"af7a3c9321ace7e2a933260472b90113\") " pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423739    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/c0cfa3da6a3913c3e67500f6c3e9d72b-etcd-certs\") pod \"etcd-multinode-348000\" (UID: \"c0cfa3da6a3913c3e67500f6c3e9d72b\") " pod="kube-system/etcd-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.518144    1526 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.19.42.24:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-348000.17c7da5cb9bb1787  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-348000,UID:multinode-348000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-348000,},FirstTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,LastTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-348000,}"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.713067    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="800ms"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.777032    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.778597    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.832721    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.354831   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.832971    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.354831   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: W0420 01:57:56.061439    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.354831   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.063005    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.354915   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: W0420 01:57:56.073517    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.354915   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.073647    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.354989   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: W0420 01:57:56.303763    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.355060   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.303918    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.355060   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.515345    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="1.6s"
	I0419 18:59:09.355060   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: I0420 01:57:56.583532    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:09.355166   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.584646    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:09.355166   14960 command_runner.go:130] > Apr 20 01:57:58 multinode-348000 kubelet[1526]: I0420 01:57:58.185924    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:09.355166   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.850138    1526 kubelet_node_status.go:112] "Node was previously registered" node="multinode-348000"
	I0419 18:59:09.355241   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.850459    1526 kubelet_node_status.go:76] "Successfully registered node" node="multinode-348000"
	I0419 18:59:09.355241   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.852895    1526 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0419 18:59:09.355241   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.854574    1526 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0419 18:59:09.355340   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.855598    1526 setters.go:580] "Node became not ready" node="multinode-348000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-04-20T01:58:00Z","lastTransitionTime":"2024-04-20T01:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0419 18:59:09.355340   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.022496    1526 apiserver.go:52] "Watching apiserver"
	I0419 18:59:09.355340   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.028549    1526 topology_manager.go:215] "Topology Admit Handler" podUID="274342c4-c21f-4279-b0ea-743d8e2c1463" podNamespace="kube-system" podName="kube-proxy-kj76x"
	I0419 18:59:09.355413   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.028950    1526 topology_manager.go:215] "Topology Admit Handler" podUID="46c91d5e-edfa-4254-a802-148047caeab5" podNamespace="kube-system" podName="kindnet-s4fsr"
	I0419 18:59:09.355413   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.029150    1526 topology_manager.go:215] "Topology Admit Handler" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7w477"
	I0419 18:59:09.355413   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.029359    1526 topology_manager.go:215] "Topology Admit Handler" podUID="ffa0cfb9-91fb-4d5b-abe7-11992c731b74" podNamespace="kube-system" podName="storage-provisioner"
	I0419 18:59:09.355590   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.029596    1526 topology_manager.go:215] "Topology Admit Handler" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916" podNamespace="default" podName="busybox-fc5497c4f-xnz2k"
	I0419 18:59:09.355590   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.030004    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.355699   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.030339    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-348000" podUID="af4afa87-c484-4b73-9a4d-e86ddcd90049"
	I0419 18:59:09.355699   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.031127    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-348000" podUID="18f5e677-6a96-47ee-9f61-60ab9445eb92"
	I0419 18:59:09.355767   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.036486    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.355767   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.078433    1526 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-348000"
	I0419 18:59:09.355767   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.080072    1526 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0419 18:59:09.355836   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.080948    1526 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:09.355836   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.155980    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/274342c4-c21f-4279-b0ea-743d8e2c1463-xtables-lock\") pod \"kube-proxy-kj76x\" (UID: \"274342c4-c21f-4279-b0ea-743d8e2c1463\") " pod="kube-system/kube-proxy-kj76x"
	I0419 18:59:09.355906   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.156217    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/274342c4-c21f-4279-b0ea-743d8e2c1463-lib-modules\") pod \"kube-proxy-kj76x\" (UID: \"274342c4-c21f-4279-b0ea-743d8e2c1463\") " pod="kube-system/kube-proxy-kj76x"
	I0419 18:59:09.355993   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157104    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/46c91d5e-edfa-4254-a802-148047caeab5-cni-cfg\") pod \"kindnet-s4fsr\" (UID: \"46c91d5e-edfa-4254-a802-148047caeab5\") " pod="kube-system/kindnet-s4fsr"
	I0419 18:59:09.355993   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157248    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46c91d5e-edfa-4254-a802-148047caeab5-xtables-lock\") pod \"kindnet-s4fsr\" (UID: \"46c91d5e-edfa-4254-a802-148047caeab5\") " pod="kube-system/kindnet-s4fsr"
	I0419 18:59:09.356065   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.157178    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:09.356065   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.157539    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:01.657504317 +0000 UTC m=+6.817666984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:09.356134   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157392    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ffa0cfb9-91fb-4d5b-abe7-11992c731b74-tmp\") pod \"storage-provisioner\" (UID: \"ffa0cfb9-91fb-4d5b-abe7-11992c731b74\") " pod="kube-system/storage-provisioner"
	I0419 18:59:09.356134   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157844    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46c91d5e-edfa-4254-a802-148047caeab5-lib-modules\") pod \"kindnet-s4fsr\" (UID: \"46c91d5e-edfa-4254-a802-148047caeab5\") " pod="kube-system/kindnet-s4fsr"
	I0419 18:59:09.356243   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.176143    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89aa15d5f8e328791151d96100a36918" path="/var/lib/kubelet/pods/89aa15d5f8e328791151d96100a36918/volumes"
	I0419 18:59:09.356243   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.179130    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fef0b92f87f018a58c19217fdf5d6e1" path="/var/lib/kubelet/pods/8fef0b92f87f018a58c19217fdf5d6e1/volumes"
	I0419 18:59:09.356243   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.206903    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.356243   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.207139    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.356382   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.207264    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:01.707244177 +0000 UTC m=+6.867406744 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.356453   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.241569    1526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-348000" podStartSLOduration=0.241545984 podStartE2EDuration="241.545984ms" podCreationTimestamp="2024-04-20 01:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-20 01:58:01.218870918 +0000 UTC m=+6.379033485" watchObservedRunningTime="2024-04-20 01:58:01.241545984 +0000 UTC m=+6.401708551"
	I0419 18:59:09.356453   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.287607    1526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-348000" podStartSLOduration=0.287584435 podStartE2EDuration="287.584435ms" podCreationTimestamp="2024-04-20 01:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-20 01:58:01.265671392 +0000 UTC m=+6.425834059" watchObservedRunningTime="2024-04-20 01:58:01.287584435 +0000 UTC m=+6.447747102"
	I0419 18:59:09.356535   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.663973    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:09.356535   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.664077    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:02.664058382 +0000 UTC m=+7.824220949 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:09.356604   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.764474    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.356655   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.764518    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.356688   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.764584    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:02.764566131 +0000 UTC m=+7.924728698 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.356688   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: I0420 01:58:02.563904    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5a777eba295e3b640d8d8a60aedcc20243d0f4a6fc4d3f3391b06fc6de0247a"
	I0419 18:59:09.356798   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.564077    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.356798   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: I0420 01:58:02.565075    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-348000" podUID="af4afa87-c484-4b73-9a4d-e86ddcd90049"
	I0419 18:59:09.356798   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.679358    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:09.356876   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.679588    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:04.67956768 +0000 UTC m=+9.839730247 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:09.356970   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.789713    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.356970   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.791860    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357054   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.792206    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:04.792183185 +0000 UTC m=+9.952345752 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357054   14960 command_runner.go:130] > Apr 20 01:58:03 multinode-348000 kubelet[1526]: E0420 01:58:03.170851    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.357125   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.169519    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.357125   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.700421    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:09.357216   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.700676    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:08.700644486 +0000 UTC m=+13.860807053 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:09.357216   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.801637    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357287   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.801751    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357287   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.801874    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:08.801835856 +0000 UTC m=+13.961998423 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357356   14960 command_runner.go:130] > Apr 20 01:58:05 multinode-348000 kubelet[1526]: E0420 01:58:05.169947    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.357424   14960 command_runner.go:130] > Apr 20 01:58:06 multinode-348000 kubelet[1526]: E0420 01:58:06.169499    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.357424   14960 command_runner.go:130] > Apr 20 01:58:07 multinode-348000 kubelet[1526]: E0420 01:58:07.170147    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.357516   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.169208    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.357516   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.751778    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:09.357585   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.752347    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:16.752328447 +0000 UTC m=+21.912491114 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:09.357614   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.852291    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.852347    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.852455    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:16.852435774 +0000 UTC m=+22.012598341 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:09 multinode-348000 kubelet[1526]: E0420 01:58:09.169017    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:10 multinode-348000 kubelet[1526]: E0420 01:58:10.169399    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:11 multinode-348000 kubelet[1526]: E0420 01:58:11.169467    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:12 multinode-348000 kubelet[1526]: E0420 01:58:12.169441    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:13 multinode-348000 kubelet[1526]: E0420 01:58:13.169983    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:14 multinode-348000 kubelet[1526]: E0420 01:58:14.169635    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:15 multinode-348000 kubelet[1526]: E0420 01:58:15.169488    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.169756    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.835157    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.835299    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:32.835279204 +0000 UTC m=+37.995441771 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.936116    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.936169    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.936232    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:32.936212581 +0000 UTC m=+38.096375148 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:17 multinode-348000 kubelet[1526]: E0420 01:58:17.169160    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.358214   14960 command_runner.go:130] > Apr 20 01:58:18 multinode-348000 kubelet[1526]: E0420 01:58:18.171760    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.358214   14960 command_runner.go:130] > Apr 20 01:58:19 multinode-348000 kubelet[1526]: E0420 01:58:19.169723    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.358214   14960 command_runner.go:130] > Apr 20 01:58:20 multinode-348000 kubelet[1526]: E0420 01:58:20.169542    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.358214   14960 command_runner.go:130] > Apr 20 01:58:21 multinode-348000 kubelet[1526]: E0420 01:58:21.169675    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:22 multinode-348000 kubelet[1526]: E0420 01:58:22.169364    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: E0420 01:58:23.169569    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: I0420 01:58:23.960680    1526 scope.go:117] "RemoveContainer" containerID="8a37c65d06fabf8d836ffb9a511bb6df5b549fa37051ef79f1f839076af60512"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: I0420 01:58:23.961154    1526 scope.go:117] "RemoveContainer" containerID="f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: E0420 01:58:23.961603    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kindnet-cni pod=kindnet-s4fsr_kube-system(46c91d5e-edfa-4254-a802-148047caeab5)\"" pod="kube-system/kindnet-s4fsr" podUID="46c91d5e-edfa-4254-a802-148047caeab5"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:24 multinode-348000 kubelet[1526]: E0420 01:58:24.169608    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:25 multinode-348000 kubelet[1526]: E0420 01:58:25.169976    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:26 multinode-348000 kubelet[1526]: E0420 01:58:26.169734    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:27 multinode-348000 kubelet[1526]: E0420 01:58:27.170054    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:28 multinode-348000 kubelet[1526]: E0420 01:58:28.169260    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:29 multinode-348000 kubelet[1526]: E0420 01:58:29.169306    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:30 multinode-348000 kubelet[1526]: E0420 01:58:30.169857    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:31 multinode-348000 kubelet[1526]: E0420 01:58:31.169543    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.169556    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.891318    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.891496    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:59:04.891477649 +0000 UTC m=+70.051640216 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.992269    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.992577    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.358958   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.992723    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:59:04.992688767 +0000 UTC m=+70.152851434 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.358958   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: I0420 01:58:33.115355    1526 scope.go:117] "RemoveContainer" containerID="e248c230a4aa379bf469f41a95d1ea2033316d322a10b6da0ae06f656334b936"
	I0419 18:59:09.358958   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: I0420 01:58:33.115897    1526 scope.go:117] "RemoveContainer" containerID="45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702"
	I0419 18:59:09.359099   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: E0420 01:58:33.116183    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ffa0cfb9-91fb-4d5b-abe7-11992c731b74)\"" pod="kube-system/storage-provisioner" podUID="ffa0cfb9-91fb-4d5b-abe7-11992c731b74"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: E0420 01:58:33.169303    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:34 multinode-348000 kubelet[1526]: E0420 01:58:34.169175    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:35 multinode-348000 kubelet[1526]: E0420 01:58:35.169508    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 kubelet[1526]: E0420 01:58:36.169960    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 kubelet[1526]: I0420 01:58:36.170769    1526 scope.go:117] "RemoveContainer" containerID="f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:37 multinode-348000 kubelet[1526]: E0420 01:58:37.171433    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:38 multinode-348000 kubelet[1526]: E0420 01:58:38.169747    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:39 multinode-348000 kubelet[1526]: E0420 01:58:39.169252    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:40 multinode-348000 kubelet[1526]: E0420 01:58:40.169368    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:40 multinode-348000 kubelet[1526]: I0420 01:58:40.269590    1526 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 kubelet[1526]: I0420 01:58:45.169759    1526 scope.go:117] "RemoveContainer" containerID="45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]: I0420 01:58:55.162183    1526 scope.go:117] "RemoveContainer" containerID="490377504e57c3189163833390967e79bb80d222691d4402677feb6f25ed22f4"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]: I0420 01:58:55.206283    1526 scope.go:117] "RemoveContainer" containerID="53f6a00490766be2eb687e6fff052ca7a46ae16a0baf4551e956c81550d673b2"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]: E0420 01:58:55.212558    1526 iptables.go:577] "Could not set up iptables canary" err=<
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 kubelet[1526]: I0420 01:59:05.918992    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75ff9f4e9dde29a997e4321dd3659a2ce7d479a75826a78c4d3525f1eb5f696f"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 kubelet[1526]: I0420 01:59:05.948376    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f28a1e746a9b438367a8e05d2e1a085afb4abec4174f7a7eb80549e02b95047a"
	I0419 18:59:09.402632   14960 logs.go:123] Gathering logs for kube-apiserver [bd3aa93bac25] ...
	I0419 18:59:09.402632   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd3aa93bac25"
	I0419 18:59:09.437496   14960 command_runner.go:130] ! I0420 01:57:57.501840       1 options.go:221] external host was not specified, using 172.19.42.24
	I0419 18:59:09.437570   14960 command_runner.go:130] ! I0420 01:57:57.505380       1 server.go:148] Version: v1.30.0
	I0419 18:59:09.438968   14960 command_runner.go:130] ! I0420 01:57:57.505690       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.439029   14960 command_runner.go:130] ! I0420 01:57:58.138487       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0419 18:59:09.439077   14960 command_runner.go:130] ! I0420 01:57:58.138530       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0419 18:59:09.439112   14960 command_runner.go:130] ! I0420 01:57:58.138987       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0419 18:59:09.439148   14960 command_runner.go:130] ! I0420 01:57:58.139098       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 18:59:09.439201   14960 command_runner.go:130] ! I0420 01:57:58.139890       1 instance.go:299] Using reconciler: lease
	I0419 18:59:09.439236   14960 command_runner.go:130] ! I0420 01:57:59.078678       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0419 18:59:09.439236   14960 command_runner.go:130] ! W0420 01:57:59.078889       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439266   14960 command_runner.go:130] ! I0420 01:57:59.354874       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.355339       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.630985       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.818361       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.834974       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.835019       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.835028       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.835870       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.835981       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.837241       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.838781       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.838919       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.838930       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.841133       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.841240       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.842492       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.842627       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.842640       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.843439       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.843519       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.843649       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.844516       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.847031       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.847132       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.847143       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.847848       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.847881       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.847889       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.849069       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.849173       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.851437       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.851563       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.851574       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.852258       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.852357       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439879   14960 command_runner.go:130] ! W0420 01:57:59.852367       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:09.439879   14960 command_runner.go:130] ! I0420 01:57:59.855318       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0419 18:59:09.439879   14960 command_runner.go:130] ! W0420 01:57:59.855413       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439879   14960 command_runner.go:130] ! W0420 01:57:59.855499       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:09.439879   14960 command_runner.go:130] ! I0420 01:57:59.857232       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0419 18:59:09.439879   14960 command_runner.go:130] ! I0420 01:57:59.859073       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0419 18:59:09.439879   14960 command_runner.go:130] ! W0420 01:57:59.859177       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0419 18:59:09.439879   14960 command_runner.go:130] ! W0420 01:57:59.859187       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439879   14960 command_runner.go:130] ! I0420 01:57:59.866540       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0419 18:59:09.440024   14960 command_runner.go:130] ! W0420 01:57:59.866633       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0419 18:59:09.440024   14960 command_runner.go:130] ! W0420 01:57:59.866643       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:57:59.873672       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0419 18:59:09.440091   14960 command_runner.go:130] ! W0420 01:57:59.873814       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.440091   14960 command_runner.go:130] ! W0420 01:57:59.873827       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:57:59.875959       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0419 18:59:09.440091   14960 command_runner.go:130] ! W0420 01:57:59.875999       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:57:59.909243       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0419 18:59:09.440091   14960 command_runner.go:130] ! W0420 01:57:59.909284       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.597195       1 secure_serving.go:213] Serving securely on [::]:8443
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.597666       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.598134       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.597703       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.597737       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.600064       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.600948       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.601165       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.601445       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.602539       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.602852       1 aggregator.go:163] waiting for initial CRD sync...
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.603187       1 controller.go:78] Starting OpenAPI AggregationController
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.604023       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.604384       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.606631       1 available_controller.go:423] Starting AvailableConditionController
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.606857       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607138       1 controller.go:116] Starting legacy_token_tracking_controller
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607178       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607325       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607349       1 controller.go:139] Starting OpenAPI controller
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607381       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607407       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607409       1 naming_controller.go:291] Starting NamingConditionController
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607487       1 establishing_controller.go:76] Starting EstablishingController
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607512       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607530       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0419 18:59:09.440629   14960 command_runner.go:130] ! I0420 01:58:00.607546       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0419 18:59:09.440629   14960 command_runner.go:130] ! I0420 01:58:00.608170       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0419 18:59:09.440629   14960 command_runner.go:130] ! I0420 01:58:00.608198       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0419 18:59:09.440629   14960 command_runner.go:130] ! I0420 01:58:00.608328       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:09.440629   14960 command_runner.go:130] ! I0420 01:58:00.608421       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:09.440629   14960 command_runner.go:130] ! I0420 01:58:00.607383       1 controller.go:87] Starting OpenAPI V3 controller
	I0419 18:59:09.440719   14960 command_runner.go:130] ! I0420 01:58:00.709605       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0419 18:59:09.440719   14960 command_runner.go:130] ! I0420 01:58:00.736531       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0419 18:59:09.440762   14960 command_runner.go:130] ! I0420 01:58:00.737086       1 shared_informer.go:320] Caches are synced for configmaps
	I0419 18:59:09.440798   14960 command_runner.go:130] ! I0420 01:58:00.737192       1 aggregator.go:165] initial CRD sync complete...
	I0419 18:59:09.440798   14960 command_runner.go:130] ! I0420 01:58:00.737219       1 autoregister_controller.go:141] Starting autoregister controller
	I0419 18:59:09.440798   14960 command_runner.go:130] ! I0420 01:58:00.737225       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0419 18:59:09.440798   14960 command_runner.go:130] ! I0420 01:58:00.737230       1 cache.go:39] Caches are synced for autoregister controller
	I0419 18:59:09.440798   14960 command_runner.go:130] ! I0420 01:58:00.740699       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 18:59:09.440877   14960 command_runner.go:130] ! I0420 01:58:00.741004       1 policy_source.go:224] refreshing policies
	I0419 18:59:09.440877   14960 command_runner.go:130] ! I0420 01:58:00.742672       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0419 18:59:09.440877   14960 command_runner.go:130] ! I0420 01:58:00.747054       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0419 18:59:09.440959   14960 command_runner.go:130] ! I0420 01:58:00.805770       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0419 18:59:09.440959   14960 command_runner.go:130] ! I0420 01:58:00.807460       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0419 18:59:09.440959   14960 command_runner.go:130] ! I0420 01:58:00.814456       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0419 18:59:09.440959   14960 command_runner.go:130] ! I0420 01:58:00.814490       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0419 18:59:09.441036   14960 command_runner.go:130] ! I0420 01:58:00.815844       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0419 18:59:09.441036   14960 command_runner.go:130] ! I0420 01:58:01.612010       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0419 18:59:09.441036   14960 command_runner.go:130] ! W0420 01:58:02.160618       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.42.231 172.19.42.24]
	I0419 18:59:09.441036   14960 command_runner.go:130] ! I0420 01:58:02.163332       1 controller.go:615] quota admission added evaluator for: endpoints
	I0419 18:59:09.441036   14960 command_runner.go:130] ! I0420 01:58:02.176968       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0419 18:59:09.441113   14960 command_runner.go:130] ! I0420 01:58:03.430204       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0419 18:59:09.441113   14960 command_runner.go:130] ! I0420 01:58:03.761410       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0419 18:59:09.441113   14960 command_runner.go:130] ! I0420 01:58:03.780335       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0419 18:59:09.441191   14960 command_runner.go:130] ! I0420 01:58:03.907022       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0419 18:59:09.441191   14960 command_runner.go:130] ! I0420 01:58:03.924019       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0419 18:59:09.441191   14960 command_runner.go:130] ! W0420 01:58:22.143512       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.42.24]
	I0419 18:59:09.449271   14960 logs.go:123] Gathering logs for kube-scheduler [e476774b8f77] ...
	I0419 18:59:09.449271   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e476774b8f77"
	I0419 18:59:09.482628   14960 command_runner.go:130] ! I0420 01:35:03.474569       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:09.483133   14960 command_runner.go:130] ! W0420 01:35:04.965330       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0419 18:59:09.483133   14960 command_runner.go:130] ! W0420 01:35:04.965379       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:09.483357   14960 command_runner.go:130] ! W0420 01:35:04.965392       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0419 18:59:09.483391   14960 command_runner.go:130] ! W0420 01:35:04.965399       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0419 18:59:09.483476   14960 command_runner.go:130] ! I0420 01:35:05.040739       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0419 18:59:09.483476   14960 command_runner.go:130] ! I0420 01:35:05.040800       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.483476   14960 command_runner.go:130] ! I0420 01:35:05.044777       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0419 18:59:09.483476   14960 command_runner.go:130] ! I0420 01:35:05.045192       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 18:59:09.483556   14960 command_runner.go:130] ! I0420 01:35:05.045423       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:09.483556   14960 command_runner.go:130] ! I0420 01:35:05.046180       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:09.483609   14960 command_runner.go:130] ! W0420 01:35:05.063208       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:09.483609   14960 command_runner.go:130] ! E0420 01:35:05.064240       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:09.483676   14960 command_runner.go:130] ! W0420 01:35:05.063609       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.483676   14960 command_runner.go:130] ! E0420 01:35:05.065130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.483731   14960 command_runner.go:130] ! W0420 01:35:05.063676       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! E0420 01:35:05.065433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! W0420 01:35:05.063732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! E0420 01:35:05.065801       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! W0420 01:35:05.063780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! E0420 01:35:05.066820       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! W0420 01:35:05.063927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! E0420 01:35:05.067122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! W0420 01:35:05.063973       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! E0420 01:35:05.069517       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! W0420 01:35:05.064025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! E0420 01:35:05.069884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! W0420 01:35:05.064095       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! E0420 01:35:05.070309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! W0420 01:35:05.064163       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! E0420 01:35:05.070884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! W0420 01:35:05.070236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! E0420 01:35:05.071293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! W0420 01:35:05.070677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! E0420 01:35:05.072125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! W0420 01:35:05.070741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! E0420 01:35:05.073528       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! W0420 01:35:05.072410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! E0420 01:35:05.073910       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! W0420 01:35:05.072540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! E0420 01:35:05.074332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! W0420 01:35:05.987809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! E0420 01:35:05.988072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! W0420 01:35:06.078924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! E0420 01:35:06.079045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! W0420 01:35:06.146102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! E0420 01:35:06.146225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! W0420 01:35:06.213142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! E0420 01:35:06.213279       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! W0420 01:35:06.278808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.484979   14960 command_runner.go:130] ! E0420 01:35:06.279232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.484979   14960 command_runner.go:130] ! W0420 01:35:06.310265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:09.484979   14960 command_runner.go:130] ! E0420 01:35:06.311126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:09.484979   14960 command_runner.go:130] ! W0420 01:35:06.333128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:09.485195   14960 command_runner.go:130] ! E0420 01:35:06.333531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:09.485195   14960 command_runner.go:130] ! W0420 01:35:06.355993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:09.485195   14960 command_runner.go:130] ! E0420 01:35:06.356053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:09.485195   14960 command_runner.go:130] ! W0420 01:35:06.356154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:09.485195   14960 command_runner.go:130] ! E0420 01:35:06.356365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:09.485195   14960 command_runner.go:130] ! W0420 01:35:06.490128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:09.485361   14960 command_runner.go:130] ! E0420 01:35:06.490240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:09.485361   14960 command_runner.go:130] ! W0420 01:35:06.496247       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:09.485458   14960 command_runner.go:130] ! E0420 01:35:06.496709       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:09.485458   14960 command_runner.go:130] ! W0420 01:35:06.552817       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.485538   14960 command_runner.go:130] ! E0420 01:35:06.552917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.485538   14960 command_runner.go:130] ! W0420 01:35:06.607496       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.485593   14960 command_runner.go:130] ! E0420 01:35:06.607914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.485666   14960 command_runner.go:130] ! W0420 01:35:06.608255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:09.485713   14960 command_runner.go:130] ! E0420 01:35:06.608488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:09.485713   14960 command_runner.go:130] ! W0420 01:35:06.623642       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:09.485713   14960 command_runner.go:130] ! E0420 01:35:06.624029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:09.485713   14960 command_runner.go:130] ! I0420 01:35:09.746203       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:09.485827   14960 command_runner.go:130] ! I0420 01:55:30.893306       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0419 18:59:09.485827   14960 command_runner.go:130] ! I0420 01:55:30.893359       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0419 18:59:09.485827   14960 command_runner.go:130] ! I0420 01:55:30.893732       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 18:59:09.485827   14960 command_runner.go:130] ! E0420 01:55:30.894682       1 run.go:74] "command failed" err="finished without leader elect"
	I0419 18:59:09.497120   14960 logs.go:123] Gathering logs for kindnet [ae0b21715f86] ...
	I0419 18:59:09.497120   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0b21715f86"
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:36.715209       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:36.715359       1 main.go:107] hostIP = 172.19.42.24
	I0419 18:59:09.524085   14960 command_runner.go:130] ! podIP = 172.19.42.24
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:36.715480       1 main.go:116] setting mtu 1500 for CNI 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:36.715877       1 main.go:146] kindnetd IP family: "ipv4"
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:36.806023       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:37.413197       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:37.413291       1 main.go:227] handling current node
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:37.413685       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:37.413745       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:37.414005       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.19.32.249 Flags: [] Table: 0} 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:37.506308       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:37.506405       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:37.506676       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.19.37.59 Flags: [] Table: 0} 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:47.525508       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:47.525608       1 main.go:227] handling current node
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:47.525629       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:47.525638       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:47.526101       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:47.526135       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:57.538448       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:57.538834       1 main.go:227] handling current node
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:57.538899       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:57.538926       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:57.539176       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:57.539274       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:59:07.555783       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:59:07.555932       1 main.go:227] handling current node
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:59:07.556426       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:59:07.556438       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:59:07.556563       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:59:07.556590       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:09.529572   14960 logs.go:123] Gathering logs for kindnet [f8c798c99407] ...
	I0419 18:59:09.529700   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c798c99407"
	I0419 18:59:09.557319   14960 command_runner.go:130] ! I0420 01:58:03.441751       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0419 18:59:09.557417   14960 command_runner.go:130] ! I0420 01:58:03.511070       1 main.go:107] hostIP = 172.19.42.24
	I0419 18:59:09.557417   14960 command_runner.go:130] ! podIP = 172.19.42.24
	I0419 18:59:09.557417   14960 command_runner.go:130] ! I0420 01:58:03.513110       1 main.go:116] setting mtu 1500 for CNI 
	I0419 18:59:09.557417   14960 command_runner.go:130] ! I0420 01:58:03.513147       1 main.go:146] kindnetd IP family: "ipv4"
	I0419 18:59:09.557417   14960 command_runner.go:130] ! I0420 01:58:03.513182       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0419 18:59:09.557417   14960 command_runner.go:130] ! I0420 01:58:07.011650       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:09.557573   14960 command_runner.go:130] ! I0420 01:58:10.084231       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:09.557573   14960 command_runner.go:130] ! I0420 01:58:13.156371       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:09.557573   14960 command_runner.go:130] ! I0420 01:58:16.227521       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:09.557573   14960 command_runner.go:130] ! I0420 01:58:19.299385       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:09.557573   14960 command_runner.go:130] ! panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:09.557573   14960 command_runner.go:130] ! goroutine 1 [running]:
	I0419 18:59:09.557573   14960 command_runner.go:130] ! main.main()
	I0419 18:59:09.557745   14960 command_runner.go:130] ! 	/go/src/cmd/kindnetd/main.go:195 +0xd3d
	I0419 18:59:09.560359   14960 logs.go:123] Gathering logs for describe nodes ...
	I0419 18:59:09.560429   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 18:59:09.801734   14960 command_runner.go:130] > Name:               multinode-348000
	I0419 18:59:09.802729   14960 command_runner.go:130] > Roles:              control-plane
	I0419 18:59:09.802729   14960 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0419 18:59:09.802781   14960 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0419 18:59:09.802781   14960 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0419 18:59:09.802781   14960 command_runner.go:130] >                     kubernetes.io/hostname=multinode-348000
	I0419 18:59:09.802781   14960 command_runner.go:130] >                     kubernetes.io/os=linux
	I0419 18:59:09.802781   14960 command_runner.go:130] >                     minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	I0419 18:59:09.802781   14960 command_runner.go:130] >                     minikube.k8s.io/name=multinode-348000
	I0419 18:59:09.802781   14960 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0419 18:59:09.802781   14960 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_04_19T18_35_09_0700
	I0419 18:59:09.802868   14960 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0419 18:59:09.802868   14960 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0419 18:59:09.802868   14960 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0419 18:59:09.802914   14960 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0419 18:59:09.802914   14960 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0419 18:59:09.802914   14960 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0419 18:59:09.802914   14960 command_runner.go:130] > CreationTimestamp:  Sat, 20 Apr 2024 01:35:05 +0000
	I0419 18:59:09.802914   14960 command_runner.go:130] > Taints:             <none>
	I0419 18:59:09.802914   14960 command_runner.go:130] > Unschedulable:      false
	I0419 18:59:09.802914   14960 command_runner.go:130] > Lease:
	I0419 18:59:09.802914   14960 command_runner.go:130] >   HolderIdentity:  multinode-348000
	I0419 18:59:09.802914   14960 command_runner.go:130] >   AcquireTime:     <unset>
	I0419 18:59:09.802914   14960 command_runner.go:130] >   RenewTime:       Sat, 20 Apr 2024 01:59:01 +0000
	I0419 18:59:09.802914   14960 command_runner.go:130] > Conditions:
	I0419 18:59:09.802914   14960 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0419 18:59:09.802914   14960 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0419 18:59:09.802914   14960 command_runner.go:130] >   MemoryPressure   False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0419 18:59:09.802914   14960 command_runner.go:130] >   DiskPressure     False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0419 18:59:09.802914   14960 command_runner.go:130] >   PIDPressure      False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0419 18:59:09.802914   14960 command_runner.go:130] >   Ready            True    Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:58:40 +0000   KubeletReady                 kubelet is posting ready status
	I0419 18:59:09.802914   14960 command_runner.go:130] > Addresses:
	I0419 18:59:09.802914   14960 command_runner.go:130] >   InternalIP:  172.19.42.24
	I0419 18:59:09.802914   14960 command_runner.go:130] >   Hostname:    multinode-348000
	I0419 18:59:09.802914   14960 command_runner.go:130] > Capacity:
	I0419 18:59:09.803499   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:09.803499   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:09.803499   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:09.803499   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:09.803499   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:09.803499   14960 command_runner.go:130] > Allocatable:
	I0419 18:59:09.803499   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:09.803499   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:09.803499   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:09.803499   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:09.803499   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:09.803499   14960 command_runner.go:130] > System Info:
	I0419 18:59:09.803499   14960 command_runner.go:130] >   Machine ID:                 bd21fc8af31a4161a4396c16b70a2fc3
	I0419 18:59:09.803637   14960 command_runner.go:130] >   System UUID:                fdc3fb6e-1818-9a4e-b496-b7ed0124a8e6
	I0419 18:59:09.803637   14960 command_runner.go:130] >   Boot ID:                    047b982b-9f97-4a1a-8f8a-a308f369753b
	I0419 18:59:09.803637   14960 command_runner.go:130] >   Kernel Version:             5.10.207
	I0419 18:59:09.803637   14960 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0419 18:59:09.803637   14960 command_runner.go:130] >   Operating System:           linux
	I0419 18:59:09.803637   14960 command_runner.go:130] >   Architecture:               amd64
	I0419 18:59:09.803637   14960 command_runner.go:130] >   Container Runtime Version:  docker://26.0.1
	I0419 18:59:09.803637   14960 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0419 18:59:09.803734   14960 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0419 18:59:09.803734   14960 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0419 18:59:09.803734   14960 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0419 18:59:09.803734   14960 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0419 18:59:09.803734   14960 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0419 18:59:09.803734   14960 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0419 18:59:09.803734   14960 command_runner.go:130] >   default                     busybox-fc5497c4f-xnz2k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0419 18:59:09.803854   14960 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-7w477                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0419 18:59:09.803854   14960 command_runner.go:130] >   kube-system                 etcd-multinode-348000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0419 18:59:09.803854   14960 command_runner.go:130] >   kube-system                 kindnet-s4fsr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0419 18:59:09.803854   14960 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-348000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0419 18:59:09.803854   14960 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-348000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0419 18:59:09.803854   14960 command_runner.go:130] >   kube-system                 kube-proxy-kj76x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0419 18:59:09.803974   14960 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-348000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0419 18:59:09.803974   14960 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0419 18:59:09.803974   14960 command_runner.go:130] > Allocated resources:
	I0419 18:59:09.803974   14960 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0419 18:59:09.803974   14960 command_runner.go:130] >   Resource           Requests     Limits
	I0419 18:59:09.803974   14960 command_runner.go:130] >   --------           --------     ------
	I0419 18:59:09.804062   14960 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0419 18:59:09.804062   14960 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0419 18:59:09.804062   14960 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0419 18:59:09.804062   14960 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0419 18:59:09.804062   14960 command_runner.go:130] > Events:
	I0419 18:59:09.804062   14960 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0419 18:59:09.804062   14960 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0419 18:59:09.804144   14960 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0419 18:59:09.804144   14960 command_runner.go:130] >   Normal  Starting                 66s                kube-proxy       
	I0419 18:59:09.804144   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-348000 status is now: NodeHasSufficientPID
	I0419 18:59:09.804144   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:09.804223   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-348000 status is now: NodeHasSufficientMemory
	I0419 18:59:09.804223   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-348000 status is now: NodeHasNoDiskPressure
	I0419 18:59:09.804223   14960 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0419 18:59:09.804223   14960 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-348000 event: Registered Node multinode-348000 in Controller
	I0419 18:59:09.804302   14960 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-348000 status is now: NodeReady
	I0419 18:59:09.804302   14960 command_runner.go:130] >   Normal  Starting                 74s                kubelet          Starting kubelet.
	I0419 18:59:09.804302   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node multinode-348000 status is now: NodeHasSufficientMemory
	I0419 18:59:09.804302   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node multinode-348000 status is now: NodeHasNoDiskPressure
	I0419 18:59:09.804302   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node multinode-348000 status is now: NodeHasSufficientPID
	I0419 18:59:09.804383   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:09.804383   14960 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-348000 event: Registered Node multinode-348000 in Controller
	I0419 18:59:09.804383   14960 command_runner.go:130] > Name:               multinode-348000-m02
	I0419 18:59:09.804383   14960 command_runner.go:130] > Roles:              <none>
	I0419 18:59:09.804383   14960 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0419 18:59:09.804383   14960 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0419 18:59:09.804383   14960 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0419 18:59:09.804383   14960 command_runner.go:130] >                     kubernetes.io/hostname=multinode-348000-m02
	I0419 18:59:09.804383   14960 command_runner.go:130] >                     kubernetes.io/os=linux
	I0419 18:59:09.804465   14960 command_runner.go:130] >                     minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	I0419 18:59:09.804465   14960 command_runner.go:130] >                     minikube.k8s.io/name=multinode-348000
	I0419 18:59:09.804499   14960 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0419 18:59:09.804499   14960 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_04_19T18_38_19_0700
	I0419 18:59:09.804528   14960 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0419 18:59:09.804528   14960 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0419 18:59:09.804528   14960 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0419 18:59:09.804528   14960 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0419 18:59:09.804528   14960 command_runner.go:130] > CreationTimestamp:  Sat, 20 Apr 2024 01:38:18 +0000
	I0419 18:59:09.804528   14960 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0419 18:59:09.804528   14960 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0419 18:59:09.804528   14960 command_runner.go:130] > Unschedulable:      false
	I0419 18:59:09.804528   14960 command_runner.go:130] > Lease:
	I0419 18:59:09.804528   14960 command_runner.go:130] >   HolderIdentity:  multinode-348000-m02
	I0419 18:59:09.804528   14960 command_runner.go:130] >   AcquireTime:     <unset>
	I0419 18:59:09.804528   14960 command_runner.go:130] >   RenewTime:       Sat, 20 Apr 2024 01:54:49 +0000
	I0419 18:59:09.804528   14960 command_runner.go:130] > Conditions:
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0419 18:59:09.804528   14960 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0419 18:59:09.804528   14960 command_runner.go:130] >   MemoryPressure   Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:09.804528   14960 command_runner.go:130] >   DiskPressure     Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:09.804528   14960 command_runner.go:130] >   PIDPressure      Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Ready            Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:09.804528   14960 command_runner.go:130] > Addresses:
	I0419 18:59:09.804528   14960 command_runner.go:130] >   InternalIP:  172.19.32.249
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Hostname:    multinode-348000-m02
	I0419 18:59:09.804528   14960 command_runner.go:130] > Capacity:
	I0419 18:59:09.804528   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:09.804528   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:09.804528   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:09.804528   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:09.804528   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:09.804528   14960 command_runner.go:130] > Allocatable:
	I0419 18:59:09.804528   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:09.804528   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:09.804528   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:09.804528   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:09.804528   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:09.804528   14960 command_runner.go:130] > System Info:
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Machine ID:                 ea453a3100b34d789441206109708446
	I0419 18:59:09.804528   14960 command_runner.go:130] >   System UUID:                9f7972f9-8942-ef4f-b0cf-029b405f5832
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Boot ID:                    d8ef37df-1396-47c1-8bea-04667e5bc60b
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Kernel Version:             5.10.207
	I0419 18:59:09.804528   14960 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Operating System:           linux
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Architecture:               amd64
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Container Runtime Version:  docker://26.0.1
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0419 18:59:09.804528   14960 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0419 18:59:09.804528   14960 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0419 18:59:09.805135   14960 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0419 18:59:09.805135   14960 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0419 18:59:09.805135   14960 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0419 18:59:09.805135   14960 command_runner.go:130] >   default                     busybox-fc5497c4f-2d5hs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0419 18:59:09.805135   14960 command_runner.go:130] >   kube-system                 kindnet-s98rh              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0419 18:59:09.805135   14960 command_runner.go:130] >   kube-system                 kube-proxy-bjv9b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0419 18:59:09.805135   14960 command_runner.go:130] > Allocated resources:
	I0419 18:59:09.805135   14960 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0419 18:59:09.805135   14960 command_runner.go:130] >   Resource           Requests   Limits
	I0419 18:59:09.805288   14960 command_runner.go:130] >   --------           --------   ------
	I0419 18:59:09.805288   14960 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0419 18:59:09.805288   14960 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0419 18:59:09.805322   14960 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0419 18:59:09.805322   14960 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0419 18:59:09.805322   14960 command_runner.go:130] > Events:
	I0419 18:59:09.805374   14960 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0419 18:59:09.805374   14960 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0419 18:59:09.805409   14960 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0419 18:59:09.805439   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-348000-m02 status is now: NodeHasSufficientMemory
	I0419 18:59:09.805439   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-348000-m02 status is now: NodeHasNoDiskPressure
	I0419 18:59:09.805477   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-348000-m02 status is now: NodeHasSufficientPID
	I0419 18:59:09.805477   14960 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-348000-m02 event: Registered Node multinode-348000-m02 in Controller
	I0419 18:59:09.805477   14960 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-348000-m02 status is now: NodeReady
	I0419 18:59:09.805477   14960 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-348000-m02 event: Registered Node multinode-348000-m02 in Controller
	I0419 18:59:09.805558   14960 command_runner.go:130] >   Normal  NodeNotReady             16s                node-controller  Node multinode-348000-m02 status is now: NodeNotReady
	I0419 18:59:09.805558   14960 command_runner.go:130] > Name:               multinode-348000-m03
	I0419 18:59:09.805558   14960 command_runner.go:130] > Roles:              <none>
	I0419 18:59:09.805558   14960 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0419 18:59:09.805558   14960 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0419 18:59:09.805558   14960 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0419 18:59:09.805639   14960 command_runner.go:130] >                     kubernetes.io/hostname=multinode-348000-m03
	I0419 18:59:09.805639   14960 command_runner.go:130] >                     kubernetes.io/os=linux
	I0419 18:59:09.805639   14960 command_runner.go:130] >                     minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	I0419 18:59:09.805639   14960 command_runner.go:130] >                     minikube.k8s.io/name=multinode-348000
	I0419 18:59:09.805639   14960 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0419 18:59:09.805639   14960 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_04_19T18_53_29_0700
	I0419 18:59:09.805639   14960 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0419 18:59:09.805736   14960 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0419 18:59:09.805736   14960 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0419 18:59:09.805736   14960 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0419 18:59:09.805736   14960 command_runner.go:130] > CreationTimestamp:  Sat, 20 Apr 2024 01:53:28 +0000
	I0419 18:59:09.805736   14960 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0419 18:59:09.805736   14960 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0419 18:59:09.805736   14960 command_runner.go:130] > Unschedulable:      false
	I0419 18:59:09.805736   14960 command_runner.go:130] > Lease:
	I0419 18:59:09.805853   14960 command_runner.go:130] >   HolderIdentity:  multinode-348000-m03
	I0419 18:59:09.805853   14960 command_runner.go:130] >   AcquireTime:     <unset>
	I0419 18:59:09.805853   14960 command_runner.go:130] >   RenewTime:       Sat, 20 Apr 2024 01:54:29 +0000
	I0419 18:59:09.805853   14960 command_runner.go:130] > Conditions:
	I0419 18:59:09.805853   14960 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0419 18:59:09.805853   14960 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0419 18:59:09.805853   14960 command_runner.go:130] >   MemoryPressure   Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:09.805853   14960 command_runner.go:130] >   DiskPressure     Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:09.805971   14960 command_runner.go:130] >   PIDPressure      Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:09.805971   14960 command_runner.go:130] >   Ready            Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:09.805971   14960 command_runner.go:130] > Addresses:
	I0419 18:59:09.805971   14960 command_runner.go:130] >   InternalIP:  172.19.37.59
	I0419 18:59:09.805971   14960 command_runner.go:130] >   Hostname:    multinode-348000-m03
	I0419 18:59:09.805971   14960 command_runner.go:130] > Capacity:
	I0419 18:59:09.805971   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:09.805971   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:09.806065   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:09.806065   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:09.806090   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:09.806090   14960 command_runner.go:130] > Allocatable:
	I0419 18:59:09.806090   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:09.806090   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:09.806090   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:09.806137   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:09.806137   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:09.806137   14960 command_runner.go:130] > System Info:
	I0419 18:59:09.806137   14960 command_runner.go:130] >   Machine ID:                 02e45e9bf03f4852a443a43ac6a8538b
	I0419 18:59:09.806137   14960 command_runner.go:130] >   System UUID:                37a43d59-2157-6e44-8d13-6c975ea12fea
	I0419 18:59:09.806137   14960 command_runner.go:130] >   Boot ID:                    404bc64b-d4fc-4c63-a589-8191649bdfaa
	I0419 18:59:09.806201   14960 command_runner.go:130] >   Kernel Version:             5.10.207
	I0419 18:59:09.806201   14960 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0419 18:59:09.806201   14960 command_runner.go:130] >   Operating System:           linux
	I0419 18:59:09.806201   14960 command_runner.go:130] >   Architecture:               amd64
	I0419 18:59:09.806271   14960 command_runner.go:130] >   Container Runtime Version:  docker://26.0.1
	I0419 18:59:09.806271   14960 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0419 18:59:09.806271   14960 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0419 18:59:09.806271   14960 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0419 18:59:09.806271   14960 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0419 18:59:09.806333   14960 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0419 18:59:09.806333   14960 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0419 18:59:09.806333   14960 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0419 18:59:09.806333   14960 command_runner.go:130] >   kube-system                 kindnet-mg8qs       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0419 18:59:09.806410   14960 command_runner.go:130] >   kube-system                 kube-proxy-2jjsq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0419 18:59:09.806410   14960 command_runner.go:130] > Allocated resources:
	I0419 18:59:09.806410   14960 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0419 18:59:09.806410   14960 command_runner.go:130] >   Resource           Requests   Limits
	I0419 18:59:09.806410   14960 command_runner.go:130] >   --------           --------   ------
	I0419 18:59:09.806410   14960 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0419 18:59:09.806469   14960 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0419 18:59:09.806469   14960 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0419 18:59:09.806469   14960 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0419 18:59:09.806469   14960 command_runner.go:130] > Events:
	I0419 18:59:09.806469   14960 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0419 18:59:09.806536   14960 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0419 18:59:09.806536   14960 command_runner.go:130] >   Normal  Starting                 5m37s                  kube-proxy       
	I0419 18:59:09.806536   14960 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0419 18:59:09.806595   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:09.806595   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientMemory
	I0419 18:59:09.806674   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-348000-m03 status is now: NodeHasNoDiskPressure
	I0419 18:59:09.806674   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientPID
	I0419 18:59:09.806674   14960 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-348000-m03 status is now: NodeReady
	I0419 18:59:09.806674   14960 command_runner.go:130] >   Normal  Starting                 5m41s                  kubelet          Starting kubelet.
	I0419 18:59:09.806737   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m41s (x2 over 5m41s)  kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientMemory
	I0419 18:59:09.806737   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m41s (x2 over 5m41s)  kubelet          Node multinode-348000-m03 status is now: NodeHasNoDiskPressure
	I0419 18:59:09.806737   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m41s (x2 over 5m41s)  kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientPID
	I0419 18:59:09.806805   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m41s                  kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:09.806805   14960 command_runner.go:130] >   Normal  RegisteredNode           5m37s                  node-controller  Node multinode-348000-m03 event: Registered Node multinode-348000-m03 in Controller
	I0419 18:59:09.806805   14960 command_runner.go:130] >   Normal  NodeReady                5m33s                  kubelet          Node multinode-348000-m03 status is now: NodeReady
	I0419 18:59:09.806874   14960 command_runner.go:130] >   Normal  NodeNotReady             3m56s                  node-controller  Node multinode-348000-m03 status is now: NodeNotReady
	I0419 18:59:09.806874   14960 command_runner.go:130] >   Normal  RegisteredNode           56s                    node-controller  Node multinode-348000-m03 event: Registered Node multinode-348000-m03 in Controller
	I0419 18:59:09.817673   14960 logs.go:123] Gathering logs for etcd [2deabe4dbdf4] ...
	I0419 18:59:09.817673   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2deabe4dbdf4"
	I0419 18:59:09.859066   14960 command_runner.go:130] ! {"level":"warn","ts":"2024-04-20T01:57:57.046906Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0419 18:59:09.860059   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.051203Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.19.42.24:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.19.42.24:2380","--initial-cluster=multinode-348000=https://172.19.42.24:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.19.42.24:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.19.42.24:2380","--name=multinode-348000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0419 18:59:09.860119   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.05132Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0419 18:59:09.860235   14960 command_runner.go:130] ! {"level":"warn","ts":"2024-04-20T01:57:57.053068Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0419 18:59:09.860235   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.053085Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.19.42.24:2380"]}
	I0419 18:59:09.860295   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.053402Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0419 18:59:09.860347   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.06821Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"]}
	I0419 18:59:09.860481   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.071769Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-348000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.19.42.24:2380"],"listen-peer-urls":["https://172.19.42.24:2380"],"advertise-client-urls":["https://172.19.42.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0419 18:59:09.860549   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.117145Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"37.959314ms"}
	I0419 18:59:09.860549   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.163657Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0419 18:59:09.860549   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186114Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","commit-index":1996}
	I0419 18:59:09.860615   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c switched to configuration voters=()"}
	I0419 18:59:09.860673   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became follower at term 2"}
	I0419 18:59:09.860673   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 4fba18389b33806c [peers: [], term: 2, commit: 1996, applied: 0, lastindex: 1996, lastterm: 2]"}
	I0419 18:59:09.860673   14960 command_runner.go:130] ! {"level":"warn","ts":"2024-04-20T01:57:57.204366Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0419 18:59:09.860741   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.210889Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1364}
	I0419 18:59:09.860741   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.22333Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1726}
	I0419 18:59:09.860741   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.233905Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0419 18:59:09.860811   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.247902Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"4fba18389b33806c","timeout":"7s"}
	I0419 18:59:09.860811   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.252957Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"4fba18389b33806c"}
	I0419 18:59:09.860879   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.253239Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"4fba18389b33806c","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0419 18:59:09.860879   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.257675Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0419 18:59:09.860879   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.259962Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0419 18:59:09.860963   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.260237Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0419 18:59:09.860963   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.26046Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0419 18:59:09.860963   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c switched to configuration voters=(5744930906065567852)"}
	I0419 18:59:09.861029   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264281Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","added-peer-id":"4fba18389b33806c","added-peer-peer-urls":["https://172.19.42.231:2380"]}
	I0419 18:59:09.861098   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264439Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","cluster-version":"3.5"}
	I0419 18:59:09.861098   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264612Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.271976Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.273753Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4fba18389b33806c","initial-advertise-peer-urls":["https://172.19.42.24:2380"],"listen-peer-urls":["https://172.19.42.24:2380"],"advertise-client-urls":["https://172.19.42.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.27526Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.27622Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.42.24:2380"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.277207Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.42.24:2380"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c is starting a new election at term 2"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became pre-candidate at term 2"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c received MsgPreVoteResp from 4fba18389b33806c at term 2"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became candidate at term 3"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c received MsgVoteResp from 4fba18389b33806c at term 3"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became leader at term 3"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4fba18389b33806c elected leader 4fba18389b33806c at term 3"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.994477Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4fba18389b33806c","local-member-attributes":"{Name:multinode-348000 ClientURLs:[https://172.19.42.24:2379]}","request-path":"/0/members/4fba18389b33806c/attributes","cluster-id":"dca2ede42d67bc1c","publish-timeout":"7s"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.994493Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.994512Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.996572Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.996617Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.999043Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.42.24:2379"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.999341Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0419 18:59:09.869988   14960 logs.go:123] Gathering logs for kube-controller-manager [9638ddcd5428] ...
	I0419 18:59:09.869988   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9638ddcd5428"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:03.372734       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:03.812267       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:03.812307       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:03.816347       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:03.816460       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:03.817145       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:03.817250       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:07.961997       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:07.962027       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:07.977942       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:07.978602       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:07.980093       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:07.989698       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:07.990033       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:07.990321       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:08.005238       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:08.005791       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:08.006985       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:08.018816       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:08.019229       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:08.019480       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:08.046904       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:08.047815       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:08.049696       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0419 18:59:09.911485   14960 command_runner.go:130] ! I0420 01:35:08.050007       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0419 18:59:09.911485   14960 command_runner.go:130] ! I0420 01:35:08.062049       1 shared_informer.go:320] Caches are synced for tokens
	I0419 18:59:09.911485   14960 command_runner.go:130] ! I0420 01:35:08.065356       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0419 18:59:09.911485   14960 command_runner.go:130] ! I0420 01:35:08.065873       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0419 18:59:09.911485   14960 command_runner.go:130] ! I0420 01:35:08.113476       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0419 18:59:09.911485   14960 command_runner.go:130] ! I0420 01:35:08.114130       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0419 18:59:09.912536   14960 command_runner.go:130] ! I0420 01:35:08.116086       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0419 18:59:09.912536   14960 command_runner.go:130] ! I0420 01:35:08.129157       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0419 18:59:09.912536   14960 command_runner.go:130] ! I0420 01:35:08.129533       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0419 18:59:09.912536   14960 command_runner.go:130] ! I0420 01:35:08.129568       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0419 18:59:09.912536   14960 command_runner.go:130] ! I0420 01:35:08.165596       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0419 18:59:09.912856   14960 command_runner.go:130] ! I0420 01:35:08.166223       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0419 18:59:09.913433   14960 command_runner.go:130] ! I0420 01:35:08.166242       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0419 18:59:09.913533   14960 command_runner.go:130] ! I0420 01:35:08.211668       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0419 18:59:09.914336   14960 command_runner.go:130] ! I0420 01:35:08.211749       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0419 18:59:09.914473   14960 command_runner.go:130] ! I0420 01:35:08.211766       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0419 18:59:09.914473   14960 command_runner.go:130] ! I0420 01:35:08.232421       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:09.914473   14960 command_runner.go:130] ! I0420 01:35:08.232496       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0419 18:59:09.914541   14960 command_runner.go:130] ! I0420 01:35:08.232934       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:09.914541   14960 command_runner.go:130] ! I0420 01:35:08.232991       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0419 18:59:09.914541   14960 command_runner.go:130] ! I0420 01:35:08.502058       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0419 18:59:09.914541   14960 command_runner.go:130] ! I0420 01:35:08.502113       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0419 18:59:09.914622   14960 command_runner.go:130] ! W0420 01:35:08.502140       1 shared_informer.go:597] resyncPeriod 21h44m16.388395173s is smaller than resyncCheckPeriod 22h35m59.940993284s and the informer has already started. Changing it to 22h35m59.940993284s
	I0419 18:59:09.914622   14960 command_runner.go:130] ! I0420 01:35:08.502208       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0419 18:59:09.914622   14960 command_runner.go:130] ! I0420 01:35:08.502278       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0419 18:59:09.914702   14960 command_runner.go:130] ! I0420 01:35:08.502298       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0419 18:59:09.914702   14960 command_runner.go:130] ! I0420 01:35:08.502314       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0419 18:59:09.914702   14960 command_runner.go:130] ! I0420 01:35:08.502330       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0419 18:59:09.914702   14960 command_runner.go:130] ! I0420 01:35:08.502351       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0419 18:59:09.914781   14960 command_runner.go:130] ! I0420 01:35:08.502407       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0419 18:59:09.914781   14960 command_runner.go:130] ! I0420 01:35:08.502437       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0419 18:59:09.914781   14960 command_runner.go:130] ! I0420 01:35:08.502458       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0419 18:59:09.914859   14960 command_runner.go:130] ! I0420 01:35:08.502479       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0419 18:59:09.914859   14960 command_runner.go:130] ! I0420 01:35:08.502501       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0419 18:59:09.914938   14960 command_runner.go:130] ! W0420 01:35:08.502514       1 shared_informer.go:597] resyncPeriod 19h4m59.465157498s is smaller than resyncCheckPeriod 22h35m59.940993284s and the informer has already started. Changing it to 22h35m59.940993284s
	I0419 18:59:09.914938   14960 command_runner.go:130] ! I0420 01:35:08.502638       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0419 18:59:09.914938   14960 command_runner.go:130] ! I0420 01:35:08.502666       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0419 18:59:09.915016   14960 command_runner.go:130] ! I0420 01:35:08.502684       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0419 18:59:09.915016   14960 command_runner.go:130] ! I0420 01:35:08.502713       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0419 18:59:09.915016   14960 command_runner.go:130] ! I0420 01:35:08.502732       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0419 18:59:09.915094   14960 command_runner.go:130] ! I0420 01:35:08.502771       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0419 18:59:09.915094   14960 command_runner.go:130] ! I0420 01:35:08.502793       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0419 18:59:09.915094   14960 command_runner.go:130] ! I0420 01:35:08.502820       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0419 18:59:09.915186   14960 command_runner.go:130] ! I0420 01:35:08.503928       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0419 18:59:09.915186   14960 command_runner.go:130] ! I0420 01:35:08.503949       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:09.915186   14960 command_runner.go:130] ! I0420 01:35:08.504053       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0419 18:59:09.915186   14960 command_runner.go:130] ! I0420 01:35:08.534828       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0419 18:59:09.915364   14960 command_runner.go:130] ! I0420 01:35:08.534961       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0419 18:59:09.915364   14960 command_runner.go:130] ! I0420 01:35:08.674769       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0419 18:59:09.915483   14960 command_runner.go:130] ! I0420 01:35:08.675139       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0419 18:59:09.915483   14960 command_runner.go:130] ! I0420 01:35:08.675159       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0419 18:59:09.915483   14960 command_runner.go:130] ! I0420 01:35:08.825012       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0419 18:59:09.915483   14960 command_runner.go:130] ! I0420 01:35:08.825352       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0419 18:59:09.915595   14960 command_runner.go:130] ! I0420 01:35:08.825549       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0419 18:59:09.915595   14960 command_runner.go:130] ! I0420 01:35:09.067591       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0419 18:59:09.915595   14960 command_runner.go:130] ! I0420 01:35:09.068206       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0419 18:59:09.915654   14960 command_runner.go:130] ! I0420 01:35:09.068502       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:09.915654   14960 command_runner.go:130] ! I0420 01:35:09.068578       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0419 18:59:09.915654   14960 command_runner.go:130] ! I0420 01:35:09.320310       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0419 18:59:09.915654   14960 command_runner.go:130] ! I0420 01:35:09.320746       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0419 18:59:09.915654   14960 command_runner.go:130] ! I0420 01:35:09.321134       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0419 18:59:09.915654   14960 command_runner.go:130] ! I0420 01:35:09.516184       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0419 18:59:09.915654   14960 command_runner.go:130] ! I0420 01:35:09.516262       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0419 18:59:09.915654   14960 command_runner.go:130] ! I0420 01:35:09.691568       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0419 18:59:09.915774   14960 command_runner.go:130] ! I0420 01:35:09.693516       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0419 18:59:09.915774   14960 command_runner.go:130] ! I0420 01:35:09.693713       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0419 18:59:09.915774   14960 command_runner.go:130] ! I0420 01:35:09.694525       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0419 18:59:09.915774   14960 command_runner.go:130] ! I0420 01:35:09.933130       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0419 18:59:09.915774   14960 command_runner.go:130] ! I0420 01:35:09.933168       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0419 18:59:09.915774   14960 command_runner.go:130] ! I0420 01:35:09.936074       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0419 18:59:09.915774   14960 command_runner.go:130] ! I0420 01:35:10.217647       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0419 18:59:09.915866   14960 command_runner.go:130] ! I0420 01:35:10.218375       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0419 18:59:09.915866   14960 command_runner.go:130] ! I0420 01:35:10.218475       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0419 18:59:09.915907   14960 command_runner.go:130] ! I0420 01:35:10.267124       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0419 18:59:09.915907   14960 command_runner.go:130] ! I0420 01:35:10.267436       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0419 18:59:09.915907   14960 command_runner.go:130] ! I0420 01:35:10.267570       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0419 18:59:09.915907   14960 command_runner.go:130] ! I0420 01:35:10.268204       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0419 18:59:09.915907   14960 command_runner.go:130] ! I0420 01:35:10.268422       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0419 18:59:09.915907   14960 command_runner.go:130] ! E0420 01:35:10.316394       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0419 18:59:09.915907   14960 command_runner.go:130] ! I0420 01:35:10.316683       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0419 18:59:09.916006   14960 command_runner.go:130] ! I0420 01:35:10.472792       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0419 18:59:09.916006   14960 command_runner.go:130] ! I0420 01:35:10.472905       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0419 18:59:09.916006   14960 command_runner.go:130] ! I0420 01:35:10.472918       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0419 18:59:09.916006   14960 command_runner.go:130] ! I0420 01:35:10.624680       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0419 18:59:09.916006   14960 command_runner.go:130] ! I0420 01:35:10.624742       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0419 18:59:09.916006   14960 command_runner.go:130] ! I0420 01:35:10.624753       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0419 18:59:09.916006   14960 command_runner.go:130] ! I0420 01:35:10.772273       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0419 18:59:09.916122   14960 command_runner.go:130] ! I0420 01:35:10.772422       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0419 18:59:09.916122   14960 command_runner.go:130] ! I0420 01:35:10.773389       1 shared_informer.go:313] Waiting for caches to sync for job
	I0419 18:59:09.916122   14960 command_runner.go:130] ! I0420 01:35:10.922317       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0419 18:59:09.916122   14960 command_runner.go:130] ! I0420 01:35:10.922464       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0419 18:59:09.916122   14960 command_runner.go:130] ! I0420 01:35:10.922478       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0419 18:59:09.916122   14960 command_runner.go:130] ! I0420 01:35:11.070777       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0419 18:59:09.916122   14960 command_runner.go:130] ! I0420 01:35:11.071059       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0419 18:59:09.916122   14960 command_runner.go:130] ! I0420 01:35:11.071119       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0419 18:59:09.916253   14960 command_runner.go:130] ! I0420 01:35:11.071166       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0419 18:59:09.916253   14960 command_runner.go:130] ! I0420 01:35:11.071195       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0419 18:59:09.916253   14960 command_runner.go:130] ! I0420 01:35:11.071205       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0419 18:59:09.916253   14960 command_runner.go:130] ! I0420 01:35:11.222012       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0419 18:59:09.916253   14960 command_runner.go:130] ! I0420 01:35:11.222056       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0419 18:59:09.916253   14960 command_runner.go:130] ! I0420 01:35:11.222746       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0419 18:59:09.916253   14960 command_runner.go:130] ! I0420 01:35:11.372624       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0419 18:59:09.916361   14960 command_runner.go:130] ! I0420 01:35:11.372812       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0419 18:59:09.916361   14960 command_runner.go:130] ! I0420 01:35:11.372965       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0419 18:59:09.916361   14960 command_runner.go:130] ! I0420 01:35:11.522757       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0419 18:59:09.916361   14960 command_runner.go:130] ! I0420 01:35:11.522983       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0419 18:59:09.916361   14960 command_runner.go:130] ! I0420 01:35:11.523000       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0419 18:59:09.916361   14960 command_runner.go:130] ! I0420 01:35:11.671210       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0419 18:59:09.916516   14960 command_runner.go:130] ! I0420 01:35:11.671410       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0419 18:59:09.916516   14960 command_runner.go:130] ! I0420 01:35:11.671429       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0419 18:59:09.916516   14960 command_runner.go:130] ! I0420 01:35:11.820688       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0419 18:59:09.916516   14960 command_runner.go:130] ! I0420 01:35:11.821596       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0419 18:59:09.916516   14960 command_runner.go:130] ! I0420 01:35:11.821935       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0419 18:59:09.916516   14960 command_runner.go:130] ! E0420 01:35:11.971137       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0419 18:59:09.916516   14960 command_runner.go:130] ! I0420 01:35:11.971301       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0419 18:59:09.916637   14960 command_runner.go:130] ! I0420 01:35:11.971316       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0419 18:59:09.916637   14960 command_runner.go:130] ! I0420 01:35:11.971323       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0419 18:59:09.916637   14960 command_runner.go:130] ! I0420 01:35:12.121255       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0419 18:59:09.916682   14960 command_runner.go:130] ! I0420 01:35:12.121746       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0419 18:59:09.916682   14960 command_runner.go:130] ! I0420 01:35:12.121947       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0419 18:59:09.916682   14960 command_runner.go:130] ! I0420 01:35:12.274169       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0419 18:59:09.916682   14960 command_runner.go:130] ! I0420 01:35:12.274383       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0419 18:59:09.916682   14960 command_runner.go:130] ! I0420 01:35:12.274402       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0419 18:59:09.916740   14960 command_runner.go:130] ! I0420 01:35:12.318009       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0419 18:59:09.916740   14960 command_runner.go:130] ! I0420 01:35:12.318126       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0419 18:59:09.916740   14960 command_runner.go:130] ! I0420 01:35:12.318164       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:09.916806   14960 command_runner.go:130] ! I0420 01:35:12.318524       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0419 18:59:09.916806   14960 command_runner.go:130] ! I0420 01:35:12.318628       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0419 18:59:09.916806   14960 command_runner.go:130] ! I0420 01:35:12.318650       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:09.916865   14960 command_runner.go:130] ! I0420 01:35:12.319568       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0419 18:59:09.916865   14960 command_runner.go:130] ! I0420 01:35:12.319800       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:09.916865   14960 command_runner.go:130] ! I0420 01:35:12.319996       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0419 18:59:09.916939   14960 command_runner.go:130] ! I0420 01:35:12.320096       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0419 18:59:09.916939   14960 command_runner.go:130] ! I0420 01:35:12.320128       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0419 18:59:09.916939   14960 command_runner.go:130] ! I0420 01:35:12.320161       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:09.916939   14960 command_runner.go:130] ! I0420 01:35:12.320270       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:09.917004   14960 command_runner.go:130] ! I0420 01:35:22.381189       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0419 18:59:09.917004   14960 command_runner.go:130] ! I0420 01:35:22.381256       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0419 18:59:09.917004   14960 command_runner.go:130] ! I0420 01:35:22.381472       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0419 18:59:09.917004   14960 command_runner.go:130] ! I0420 01:35:22.381508       1 shared_informer.go:313] Waiting for caches to sync for node
	I0419 18:59:09.917069   14960 command_runner.go:130] ! I0420 01:35:22.395580       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0419 18:59:09.917069   14960 command_runner.go:130] ! I0420 01:35:22.395660       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0419 18:59:09.917126   14960 command_runner.go:130] ! I0420 01:35:22.396587       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0419 18:59:09.917126   14960 command_runner.go:130] ! I0420 01:35:22.396886       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0419 18:59:09.917126   14960 command_runner.go:130] ! I0420 01:35:22.405182       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:09.917126   14960 command_runner.go:130] ! I0420 01:35:22.428741       1 shared_informer.go:320] Caches are synced for service account
	I0419 18:59:09.917208   14960 command_runner.go:130] ! I0420 01:35:22.430037       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0419 18:59:09.917208   14960 command_runner.go:130] ! I0420 01:35:22.433041       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0419 18:59:09.917208   14960 command_runner.go:130] ! I0420 01:35:22.440027       1 shared_informer.go:320] Caches are synced for namespace
	I0419 18:59:09.917265   14960 command_runner.go:130] ! I0420 01:35:22.466474       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:09.917265   14960 command_runner.go:130] ! I0420 01:35:22.469554       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0419 18:59:09.917265   14960 command_runner.go:130] ! I0420 01:35:22.477923       1 shared_informer.go:320] Caches are synced for PV protection
	I0419 18:59:09.917265   14960 command_runner.go:130] ! I0420 01:35:22.479748       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0419 18:59:09.917265   14960 command_runner.go:130] ! I0420 01:35:22.479794       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0419 18:59:09.917327   14960 command_runner.go:130] ! I0420 01:35:22.480700       1 shared_informer.go:320] Caches are synced for PVC protection
	I0419 18:59:09.917327   14960 command_runner.go:130] ! I0420 01:35:22.492034       1 shared_informer.go:320] Caches are synced for expand
	I0419 18:59:09.917327   14960 command_runner.go:130] ! I0420 01:35:22.492084       1 shared_informer.go:320] Caches are synced for endpoint
	I0419 18:59:09.917327   14960 command_runner.go:130] ! I0420 01:35:22.492130       1 shared_informer.go:320] Caches are synced for job
	I0419 18:59:09.917383   14960 command_runner.go:130] ! I0420 01:35:22.497920       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0419 18:59:09.917383   14960 command_runner.go:130] ! I0420 01:35:22.498399       1 shared_informer.go:320] Caches are synced for node
	I0419 18:59:09.917435   14960 command_runner.go:130] ! I0420 01:35:22.498473       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0419 18:59:09.917435   14960 command_runner.go:130] ! I0420 01:35:22.498515       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0419 18:59:09.917435   14960 command_runner.go:130] ! I0420 01:35:22.498526       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0419 18:59:09.917475   14960 command_runner.go:130] ! I0420 01:35:22.498531       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0419 18:59:09.917475   14960 command_runner.go:130] ! I0420 01:35:22.508187       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000\" does not exist"
	I0419 18:59:09.917520   14960 command_runner.go:130] ! I0420 01:35:22.508396       1 shared_informer.go:320] Caches are synced for GC
	I0419 18:59:09.917520   14960 command_runner.go:130] ! I0420 01:35:22.512585       1 shared_informer.go:320] Caches are synced for crt configmap
	I0419 18:59:09.917520   14960 command_runner.go:130] ! I0420 01:35:22.520820       1 shared_informer.go:320] Caches are synced for daemon sets
	I0419 18:59:09.917520   14960 command_runner.go:130] ! I0420 01:35:22.521073       1 shared_informer.go:320] Caches are synced for stateful set
	I0419 18:59:09.917585   14960 command_runner.go:130] ! I0420 01:35:22.521189       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0419 18:59:09.917585   14960 command_runner.go:130] ! I0420 01:35:22.521223       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0419 18:59:09.917585   14960 command_runner.go:130] ! I0420 01:35:22.521268       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0419 18:59:09.917585   14960 command_runner.go:130] ! I0420 01:35:22.527709       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0419 18:59:09.917648   14960 command_runner.go:130] ! I0420 01:35:22.528722       1 shared_informer.go:320] Caches are synced for cronjob
	I0419 18:59:09.917648   14960 command_runner.go:130] ! I0420 01:35:22.528751       1 shared_informer.go:320] Caches are synced for ephemeral
	I0419 18:59:09.917648   14960 command_runner.go:130] ! I0420 01:35:22.528767       1 shared_informer.go:320] Caches are synced for TTL
	I0419 18:59:09.917648   14960 command_runner.go:130] ! I0420 01:35:22.529370       1 shared_informer.go:320] Caches are synced for HPA
	I0419 18:59:09.917706   14960 command_runner.go:130] ! I0420 01:35:22.529414       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0419 18:59:09.917706   14960 command_runner.go:130] ! I0420 01:35:22.529477       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:09.917706   14960 command_runner.go:130] ! I0420 01:35:22.529509       1 shared_informer.go:320] Caches are synced for persistent volume
	I0419 18:59:09.917706   14960 command_runner.go:130] ! I0420 01:35:22.552273       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000" podCIDRs=["10.244.0.0/24"]
	I0419 18:59:09.917768   14960 command_runner.go:130] ! I0420 01:35:22.569198       1 shared_informer.go:320] Caches are synced for taint
	I0419 18:59:09.917768   14960 command_runner.go:130] ! I0420 01:35:22.569287       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0419 18:59:09.917768   14960 command_runner.go:130] ! I0420 01:35:22.569354       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000"
	I0419 18:59:09.917828   14960 command_runner.go:130] ! I0420 01:35:22.569429       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0419 18:59:09.917828   14960 command_runner.go:130] ! I0420 01:35:22.574991       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0419 18:59:09.917828   14960 command_runner.go:130] ! I0420 01:35:22.590559       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0419 18:59:09.917828   14960 command_runner.go:130] ! I0420 01:35:22.623057       1 shared_informer.go:320] Caches are synced for deployment
	I0419 18:59:09.917888   14960 command_runner.go:130] ! I0420 01:35:22.623597       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0419 18:59:09.917888   14960 command_runner.go:130] ! I0420 01:35:22.651041       1 shared_informer.go:320] Caches are synced for disruption
	I0419 18:59:09.917888   14960 command_runner.go:130] ! I0420 01:35:22.699011       1 shared_informer.go:320] Caches are synced for attach detach
	I0419 18:59:09.917888   14960 command_runner.go:130] ! I0420 01:35:22.705303       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:09.917954   14960 command_runner.go:130] ! I0420 01:35:22.706815       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:09.917954   14960 command_runner.go:130] ! I0420 01:35:23.168892       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:09.917954   14960 command_runner.go:130] ! I0420 01:35:23.169115       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0419 18:59:09.917954   14960 command_runner.go:130] ! I0420 01:35:23.179171       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:09.917954   14960 command_runner.go:130] ! I0420 01:35:23.263116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="374.4156ms"
	I0419 18:59:09.917954   14960 command_runner.go:130] ! I0420 01:35:23.291471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.172623ms"
	I0419 18:59:09.918034   14960 command_runner.go:130] ! I0420 01:35:23.291547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.106µs"
	I0419 18:59:09.918034   14960 command_runner.go:130] ! I0420 01:35:23.578182       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.803114ms"
	I0419 18:59:09.918106   14960 command_runner.go:130] ! I0420 01:35:23.630233       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.666311ms"
	I0419 18:59:09.918106   14960 command_runner.go:130] ! I0420 01:35:23.630467       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="183.125µs"
	I0419 18:59:09.918106   14960 command_runner.go:130] ! I0420 01:35:36.906373       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="291.116µs"
	I0419 18:59:09.918106   14960 command_runner.go:130] ! I0420 01:35:36.934151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="76.104µs"
	I0419 18:59:09.918185   14960 command_runner.go:130] ! I0420 01:35:37.573034       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0419 18:59:09.918223   14960 command_runner.go:130] ! I0420 01:35:39.217159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.488µs"
	I0419 18:59:09.918223   14960 command_runner.go:130] ! I0420 01:35:39.265403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.862669ms"
	I0419 18:59:09.918223   14960 command_runner.go:130] ! I0420 01:35:39.266023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="552.786µs"
	I0419 18:59:09.918265   14960 command_runner.go:130] ! I0420 01:38:18.575680       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m02\" does not exist"
	I0419 18:59:09.918265   14960 command_runner.go:130] ! I0420 01:38:18.590900       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m02" podCIDRs=["10.244.1.0/24"]
	I0419 18:59:09.918326   14960 command_runner.go:130] ! I0420 01:38:22.613051       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m02"
	I0419 18:59:09.918326   14960 command_runner.go:130] ! I0420 01:38:37.669535       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.918326   14960 command_runner.go:130] ! I0420 01:39:03.031296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.090021ms"
	I0419 18:59:09.918385   14960 command_runner.go:130] ! I0420 01:39:03.053897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.363721ms"
	I0419 18:59:09.918385   14960 command_runner.go:130] ! I0420 01:39:03.054543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.499µs"
	I0419 18:59:09.918385   14960 command_runner.go:130] ! I0420 01:39:05.783927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.434034ms"
	I0419 18:59:09.918440   14960 command_runner.go:130] ! I0420 01:39:05.784276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="108.901µs"
	I0419 18:59:09.918440   14960 command_runner.go:130] ! I0420 01:39:07.103598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.163039ms"
	I0419 18:59:09.918440   14960 command_runner.go:130] ! I0420 01:39:07.104054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.4µs"
	I0419 18:59:09.918440   14960 command_runner.go:130] ! I0420 01:42:52.390190       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.918502   14960 command_runner.go:130] ! I0420 01:42:52.390530       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0419 18:59:09.918502   14960 command_runner.go:130] ! I0420 01:42:52.403944       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m03" podCIDRs=["10.244.2.0/24"]
	I0419 18:59:09.918565   14960 command_runner.go:130] ! I0420 01:42:52.676079       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m03"
	I0419 18:59:09.918565   14960 command_runner.go:130] ! I0420 01:43:11.211743       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.918565   14960 command_runner.go:130] ! I0420 01:50:42.818871       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.918683   14960 command_runner.go:130] ! I0420 01:53:22.621370       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.918683   14960 command_runner.go:130] ! I0420 01:53:28.752017       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0419 18:59:09.918747   14960 command_runner.go:130] ! I0420 01:53:28.753300       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.918747   14960 command_runner.go:130] ! I0420 01:53:28.789161       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m03" podCIDRs=["10.244.3.0/24"]
	I0419 18:59:09.918799   14960 command_runner.go:130] ! I0420 01:53:36.097701       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m03"
	I0419 18:59:09.918799   14960 command_runner.go:130] ! I0420 01:55:13.205537       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.942596   14960 logs.go:123] Gathering logs for Docker ...
	I0419 18:59:09.942596   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 18:59:09.976592   14960 command_runner.go:130] > Apr 20 01:56:27 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:09.976592   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:09.976694   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:09.976694   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:09.976793   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0419 18:59:09.976793   14960 command_runner.go:130] > Apr 20 01:56:28 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:09.976793   14960 command_runner.go:130] > Apr 20 01:56:28 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:09.976896   14960 command_runner.go:130] > Apr 20 01:56:28 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:09.976896   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0419 18:59:09.976896   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0419 18:59:09.977006   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:09.977006   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:09.977109   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:09.977109   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:09.977109   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0419 18:59:09.977211   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:09.977211   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:09.977317   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:09.977317   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0419 18:59:09.977317   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0419 18:59:09.977423   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:09.977423   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:09.977423   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:09.977523   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:09.977523   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0419 18:59:09.977622   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:09.977622   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:09.977622   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:09.977723   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0419 18:59:09.977723   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0419 18:59:09.977826   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0419 18:59:09.977883   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:09.977936   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:09.977936   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 systemd[1]: Starting Docker Application Container Engine...
	I0419 18:59:09.978016   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[657]: time="2024-04-20T01:57:18.710176447Z" level=info msg="Starting up"
	I0419 18:59:09.978064   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[657]: time="2024-04-20T01:57:18.711651787Z" level=info msg="containerd not running, starting managed containerd"
	I0419 18:59:09.978150   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[657]: time="2024-04-20T01:57:18.716746379Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=664
	I0419 18:59:09.978150   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.747165139Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0419 18:59:09.978253   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778478063Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0419 18:59:09.978253   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778645056Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0419 18:59:09.978253   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778743452Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0419 18:59:09.978356   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778860747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.978356   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.780842867Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:09.978458   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.780950062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.978574   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781281849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:09.978574   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781381945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.978674   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781405744Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0419 18:59:09.978674   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781418543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.978772   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781890324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.978772   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.782561296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.978873   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786065554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:09.978873   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786174049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.978972   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786324143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:09.978972   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786418639Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0419 18:59:09.979076   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.787110911Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0419 18:59:09.979076   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.787239206Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0419 18:59:09.979176   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.787257405Z" level=info msg="metadata content store policy set" policy=shared
	I0419 18:59:09.979176   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794203322Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0419 18:59:09.979272   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794271219Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0419 18:59:09.979272   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794292218Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0419 18:59:09.979272   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794308818Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0419 18:59:09.979375   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794325217Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0419 18:59:09.979375   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794399514Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0419 18:59:09.979473   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794805397Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0419 18:59:09.979473   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795021089Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0419 18:59:09.979572   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795123284Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0419 18:59:09.979572   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795209281Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0419 18:59:09.979671   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795227280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.979671   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795252079Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.979770   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795270178Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.979822   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795305177Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.979822   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795321176Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.979910   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795336476Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.979910   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795368674Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.980009   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795383074Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.980009   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795405873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980009   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795423972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980109   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795438172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980109   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795453671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980222   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795468970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980318   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795483970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980318   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795576866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980415   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795594465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980415   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795610465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980515   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795628364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980515   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795642863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980515   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795657163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980649   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795671762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980649   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795713760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0419 18:59:09.980774   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795756259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980774   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795811856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980911   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795843255Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0419 18:59:09.980911   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795920052Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0419 18:59:09.981019   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795944151Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0419 18:59:09.981019   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796175542Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0419 18:59:09.981217   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796194141Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0419 18:59:09.981217   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796263238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.981330   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796305336Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0419 18:59:09.981330   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796319336Z" level=info msg="NRI interface is disabled by configuration."
	I0419 18:59:09.981330   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.797416591Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0419 18:59:09.981453   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.797499188Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0419 18:59:09.981453   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.797659381Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0419 18:59:09.981553   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.798178860Z" level=info msg="containerd successfully booted in 0.054054s"
	I0419 18:59:09.981553   14960 command_runner.go:130] > Apr 20 01:57:19 multinode-348000 dockerd[657]: time="2024-04-20T01:57:19.782299514Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0419 18:59:09.981656   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.015692930Z" level=info msg="Loading containers: start."
	I0419 18:59:09.981656   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.458486133Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0419 18:59:09.981794   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.551244732Z" level=info msg="Loading containers: done."
	I0419 18:59:09.981794   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.579065252Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	I0419 18:59:09.981794   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.579904847Z" level=info msg="Daemon has completed initialization"
	I0419 18:59:09.981898   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.637363974Z" level=info msg="API listen on [::]:2376"
	I0419 18:59:09.981898   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 systemd[1]: Started Docker Application Container Engine.
	I0419 18:59:09.982018   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.639403561Z" level=info msg="API listen on /var/run/docker.sock"
	I0419 18:59:09.982018   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.472939019Z" level=info msg="Processing signal 'terminated'"
	I0419 18:59:09.982018   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 systemd[1]: Stopping Docker Application Container Engine...
	I0419 18:59:09.982133   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.475778002Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0419 18:59:09.982133   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.476696029Z" level=info msg="Daemon shutdown complete"
	I0419 18:59:09.982237   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.476992338Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0419 18:59:09.982285   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.477157542Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0419 18:59:09.982320   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 systemd[1]: docker.service: Deactivated successfully.
	I0419 18:59:09.982320   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 systemd[1]: Stopped Docker Application Container Engine.
	I0419 18:59:09.982407   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 systemd[1]: Starting Docker Application Container Engine...
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:47.551071055Z" level=info msg="Starting up"
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:47.552229889Z" level=info msg="containerd not running, starting managed containerd"
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:47.555196776Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1058
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.593728507Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623742487Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623851391Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623939793Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623957394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624003795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624024296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624225802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624329205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624352205Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624363806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624391206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624622913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.627825907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.627876709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628096615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628227419Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628259620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628280321Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628292621Z" level=info msg="metadata content store policy set" policy=shared
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628514127Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0419 18:59:09.982991   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628716033Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0419 18:59:09.982991   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628764035Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0419 18:59:09.982991   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628783935Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0419 18:59:09.982991   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628872138Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0419 18:59:09.982991   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628938240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0419 18:59:09.983163   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.629513057Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.629754764Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.629936569Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630060973Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630086474Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630105074Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630122275Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630140375Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630157976Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630174076Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630191277Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630206077Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630234378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630252178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630267579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630283379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630298980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630314780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630328781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630360082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630377682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630410083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630423583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630455984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630487185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630505186Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630528987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983747   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630643490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983747   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630666391Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0419 18:59:09.983747   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630895497Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0419 18:59:09.983747   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630922398Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0419 18:59:09.983747   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630934798Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0419 18:59:09.983747   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630945799Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0419 18:59:09.983971   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.631020001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.984090   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.631067102Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0419 18:59:09.984163   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.631083303Z" level=info msg="NRI interface is disabled by configuration."
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632230736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632319639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632396541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632594347Z" level=info msg="containerd successfully booted in 0.042627s"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:48 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:48.604760074Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:48 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:48.637031921Z" level=info msg="Loading containers: start."
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:48 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:48.936729515Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.021589305Z" level=info msg="Loading containers: done."
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.048182786Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.048316590Z" level=info msg="Daemon has completed initialization"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.095567976Z" level=info msg="API listen on /var/run/docker.sock"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 systemd[1]: Started Docker Application Container Engine.
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.098304756Z" level=info msg="API listen on [::]:2376"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Loaded network plugin cni"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Start cri-dockerd grpc backend"
	I0419 18:59:09.984744   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0419 18:59:09.984744   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-xnz2k_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"476e3efb38684054cbc21c027cf1ddd3f9ca47bb829786f8636fd877fd4b2f81\""
	I0419 18:59:09.984744   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-7w477_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2dd294415aae178d6b9bed0368d49bedc6d0afa8f5b9ad0011c73ffcb2c24b3c\""
	I0419 18:59:09.985013   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.930297132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.985069   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.930785146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.985241   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.930860749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985300   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.931659072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985349   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002064338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.985401   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002134840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.985497   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002149541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985544   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002292345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985599   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e8baa597c1467ae8c3a1ce9abf0a378ddcffed5a93f7b41dddb4ce4511320dfd/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:09.985650   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151299517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151377019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151407720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151504323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169004837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169190142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169211543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169324146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/118cca57d1f547838d0c2442f2945e9daf9b041170bf162489525286bf3d75c2/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7052a6f04def38545970026f2934eb29913066396b26eb86f6675e7c0c685db/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ab9ff1d9068805d6a2ad10084128436e5b1fcaaa8c64f2f1a5e811455f0f99ee/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441120322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441388229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441493933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441783141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.541538868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.541743874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.541768275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.542244089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.635958239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.636305549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.986319   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.636479754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.986319   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.636776363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.986319   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.703176711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.986319   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.703241613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.986319   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.703253713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.986555   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.704949863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.986621   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:00Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0419 18:59:09.986672   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.682944236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.986730   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.683066839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.986781   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.683087340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.986835   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.683203743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.986887   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.775229244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.986944   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.775527153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.987046   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.775671457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.987099   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.776004967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.987152   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.791300015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.987202   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.791478721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.987304   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.791611925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.987359   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.792335946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.987411   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/09f65a695303814b61d199dd53caa1efad532c76b04176a404206b865fd6b38a/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:09.987465   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5472c1fba3929b8a427273be545db7fb7df3c0ffbf035e24a1d3b71418b9e031/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:09.987572   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.150688061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.987622   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.150834665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.987678   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.151084573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.987744   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.152395011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.987796   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.341191051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.987912   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.341388457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.987976   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.341505460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.988048   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.342279283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.988114   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b5a777eba295e3b640d8d8a60aedcc20243d0f4a6fc4d3f3391b06fc6de0247a/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:09.988263   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.851490425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.988321   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.852225247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.988382   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.852338750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.988490   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.853459583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.988541   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1052]: time="2024-04-20T01:58:23.324898945Z" level=info msg="ignoring event" container=f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0419 18:59:09.988594   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:23.325982179Z" level=info msg="shim disconnected" id=f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919 namespace=moby
	I0419 18:59:09.988697   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:23.326071582Z" level=warning msg="cleaning up after shim disconnected" id=f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919 namespace=moby
	I0419 18:59:09.988751   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:23.326085983Z" level=info msg="cleaning up dead shim" namespace=moby
	I0419 18:59:09.988806   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1052]: time="2024-04-20T01:58:32.676558128Z" level=info msg="ignoring event" container=45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0419 18:59:09.988867   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:32.681127769Z" level=info msg="shim disconnected" id=45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702 namespace=moby
	I0419 18:59:09.988936   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:32.681255073Z" level=warning msg="cleaning up after shim disconnected" id=45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702 namespace=moby
	I0419 18:59:09.989006   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:32.681323075Z" level=info msg="cleaning up dead shim" namespace=moby
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356286643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356444648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356547351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356850260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.371313874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.372274603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.372497010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.373020725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.468874089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.469011493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.469033394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.469948221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.577907307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.578194516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.578360121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.578991939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:59:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f28a1e746a9b438367a8e05d2e1a085afb4abec4174f7a7eb80549e02b95047a/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:09.989634   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:59:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/75ff9f4e9dde29a997e4321dd3659a2ce7d479a75826a78c4d3525f1eb5f696f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.046055457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.046333943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.046360842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.047301594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.170326341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.170444835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.170467134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.171235195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:12.546276   14960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 18:59:12.575530   14960 command_runner.go:130] > 1877
	I0419 18:59:12.575637   14960 api_server.go:72] duration metric: took 1m6.9907902s to wait for apiserver process to appear ...
	I0419 18:59:12.575637   14960 api_server.go:88] waiting for apiserver healthz status ...
	I0419 18:59:12.586822   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 18:59:12.612864   14960 command_runner.go:130] > bd3aa93bac25
	I0419 18:59:12.612954   14960 logs.go:276] 1 containers: [bd3aa93bac25]
	I0419 18:59:12.625411   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 18:59:12.655543   14960 command_runner.go:130] > 2deabe4dbdf4
	I0419 18:59:12.656099   14960 logs.go:276] 1 containers: [2deabe4dbdf4]
	I0419 18:59:12.666517   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 18:59:12.693989   14960 command_runner.go:130] > 352cf21a3e20
	I0419 18:59:12.694081   14960 command_runner.go:130] > 627b84abf45c
	I0419 18:59:12.694081   14960 logs.go:276] 2 containers: [352cf21a3e20 627b84abf45c]
	I0419 18:59:12.705809   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 18:59:12.736207   14960 command_runner.go:130] > d57aee391c14
	I0419 18:59:12.736266   14960 command_runner.go:130] > e476774b8f77
	I0419 18:59:12.736266   14960 logs.go:276] 2 containers: [d57aee391c14 e476774b8f77]
	I0419 18:59:12.747925   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 18:59:12.773815   14960 command_runner.go:130] > e438af0f1ec9
	I0419 18:59:12.773815   14960 command_runner.go:130] > a6586791413d
	I0419 18:59:12.775175   14960 logs.go:276] 2 containers: [e438af0f1ec9 a6586791413d]
	I0419 18:59:12.786498   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 18:59:12.826401   14960 command_runner.go:130] > b67f2295d26c
	I0419 18:59:12.826452   14960 command_runner.go:130] > 9638ddcd5428
	I0419 18:59:12.826483   14960 logs.go:276] 2 containers: [b67f2295d26c 9638ddcd5428]
	I0419 18:59:12.836351   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 18:59:12.867729   14960 command_runner.go:130] > ae0b21715f86
	I0419 18:59:12.868779   14960 command_runner.go:130] > f8c798c99407
	I0419 18:59:12.868824   14960 logs.go:276] 2 containers: [ae0b21715f86 f8c798c99407]
	I0419 18:59:12.868875   14960 logs.go:123] Gathering logs for kubelet ...
	I0419 18:59:12.868875   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 18:59:12.901971   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0419 18:59:12.902423   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: I0420 01:57:51.575772    1390 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0419 18:59:12.902464   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: I0420 01:57:51.576306    1390 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:12.902464   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: I0420 01:57:51.577194    1390 server.go:927] "Client rotation is on, will bootstrap in background"
	I0419 18:59:12.902500   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: E0420 01:57:51.579651    1390 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: I0420 01:57:52.300689    1443 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: I0420 01:57:52.301056    1443 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: I0420 01:57:52.301551    1443 server.go:927] "Client rotation is on, will bootstrap in background"
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: E0420 01:57:52.301845    1443 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.955182    1526 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.955367    1526 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.955676    1526 server.go:927] "Client rotation is on, will bootstrap in background"
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.957661    1526 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.971626    1526 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.998144    1526 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.998312    1526 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.999775    1526 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:54.999948    1526 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-348000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0419 18:59:12.903076   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.000770    1526 topology_manager.go:138] "Creating topology manager with none policy"
	I0419 18:59:12.903076   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.000879    1526 container_manager_linux.go:301] "Creating device plugin manager"
	I0419 18:59:12.903076   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.001855    1526 state_mem.go:36] "Initialized new in-memory state store"
	I0419 18:59:12.903076   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.003861    1526 kubelet.go:400] "Attempting to sync node with API server"
	I0419 18:59:12.903129   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.003952    1526 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0419 18:59:12.903129   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.004045    1526 kubelet.go:312] "Adding apiserver pod source"
	I0419 18:59:12.903129   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.009472    1526 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0419 18:59:12.903129   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.017989    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.903216   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.018091    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.903304   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.019381    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.903304   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.019428    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.903344   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.019619    1526 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.1" apiVersion="v1"
	I0419 18:59:12.903344   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.022328    1526 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0419 18:59:12.903344   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.023051    1526 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.025680    1526 server.go:1264] "Started kubelet"
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.028955    1526 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.031361    1526 server.go:455] "Adding debug handlers to kubelet server"
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.034499    1526 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.035670    1526 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.036524    1526 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.19.42.24:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-348000.17c7da5cb9bb1787  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-348000,UID:multinode-348000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-348000,},FirstTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,LastTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-348000,}"
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.053292    1526 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.062175    1526 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.067879    1526 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.097159    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="200ms"
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.116285    1526 factory.go:221] Registration of the systemd container factory successfully
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.117073    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.118285    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.117970    1526 reconciler.go:26] "Reconciler: start to sync state"
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.118962    1526 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.119576    1526 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.135081    1526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.165861    1526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166700    1526 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166759    1526 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166846    1526 state_mem.go:36] "Initialized new in-memory state store"
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166997    1526 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168395    1526 kubelet.go:2337] "Starting kubelet main sync loop"
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.168500    1526 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168338    1526 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168585    1526 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168613    1526 policy_none.go:49] "None policy: Start"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.167637    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.171087    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.172453    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.172557    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.187830    1526 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.187946    1526 state_mem.go:35] "Initializing new in-memory state store"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.189368    1526 state_mem.go:75] "Updated machine memory state"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.195268    1526 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.195483    1526 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.197626    1526 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.198638    1526 iptables.go:577] "Could not set up iptables canary" err=<
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.201551    1526 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-348000\" not found"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.269451    1526 topology_manager.go:215] "Topology Admit Handler" podUID="30aa2729d0c65b9f89e1ae2d151edd9b" podNamespace="kube-system" podName="kube-controller-manager-multinode-348000"
	I0419 18:59:12.906486   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.271913    1526 topology_manager.go:215] "Topology Admit Handler" podUID="92813b2aed63b63058d3fd06709fa24e" podNamespace="kube-system" podName="kube-scheduler-multinode-348000"
	I0419 18:59:12.906486   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.273779    1526 topology_manager.go:215] "Topology Admit Handler" podUID="af7a3c9321ace7e2a933260472b90113" podNamespace="kube-system" podName="kube-apiserver-multinode-348000"
	I0419 18:59:12.906539   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.275662    1526 topology_manager.go:215] "Topology Admit Handler" podUID="c0cfa3da6a3913c3e67500f6c3e9d72b" podNamespace="kube-system" podName="etcd-multinode-348000"
	I0419 18:59:12.906539   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.281258    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="476e3efb38684054cbc21c027cf1ddd3f9ca47bb829786f8636fd877fd4b2f81"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.281433    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dd294415aae178d6b9bed0368d49bedc6d0afa8f5b9ad0011c73ffcb2c24b3c"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.281454    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5d733991bf1a9e82ffd10768e0652c6c3f983ab24307142345cab3358f068bc"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.297657    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd9e5fae3950c99e6cc71d6166919d407b00212c93827d74e5b83f3896925c0a"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.310354    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="400ms"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.316552    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="187cb57784f4ebcba88e5bf725c118a7d2beec4f543d3864e8f389573f0b11f9"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.332421    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e420625b84be10aa87409a43f4296165b33ed76e82c3ba8a9214abd7177bd38"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.356050    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00d48e11227effb5f0316d58c24e374b4b3f9dcd1b98ac51d6b0038a72d47e42"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.372330    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.373779    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.376042    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da1d06ec238f43c7ad43cae75e142a6d15b9c8fb69f88ad8079f167f3f3a6fd4"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.392858    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7935893e9f22a54393d2b3d0a644f7c11a848d5604938074232342a8602e239f"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423082    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-ca-certs\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423312    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-flexvolume-dir\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423400    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-k8s-certs\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423427    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-kubeconfig\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423456    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af7a3c9321ace7e2a933260472b90113-ca-certs\") pod \"kube-apiserver-multinode-348000\" (UID: \"af7a3c9321ace7e2a933260472b90113\") " pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423489    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/c0cfa3da6a3913c3e67500f6c3e9d72b-etcd-data\") pod \"etcd-multinode-348000\" (UID: \"c0cfa3da6a3913c3e67500f6c3e9d72b\") " pod="kube-system/etcd-multinode-348000"
	I0419 18:59:12.907126   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423525    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:12.907126   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423552    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92813b2aed63b63058d3fd06709fa24e-kubeconfig\") pod \"kube-scheduler-multinode-348000\" (UID: \"92813b2aed63b63058d3fd06709fa24e\") " pod="kube-system/kube-scheduler-multinode-348000"
	I0419 18:59:12.907126   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423669    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af7a3c9321ace7e2a933260472b90113-k8s-certs\") pod \"kube-apiserver-multinode-348000\" (UID: \"af7a3c9321ace7e2a933260472b90113\") " pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:12.907221   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423703    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af7a3c9321ace7e2a933260472b90113-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-348000\" (UID: \"af7a3c9321ace7e2a933260472b90113\") " pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:12.907221   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423739    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/c0cfa3da6a3913c3e67500f6c3e9d72b-etcd-certs\") pod \"etcd-multinode-348000\" (UID: \"c0cfa3da6a3913c3e67500f6c3e9d72b\") " pod="kube-system/etcd-multinode-348000"
	I0419 18:59:12.907323   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.518144    1526 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.19.42.24:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-348000.17c7da5cb9bb1787  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-348000,UID:multinode-348000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-348000,},FirstTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,LastTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-348000,}"
	I0419 18:59:12.907377   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.713067    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="800ms"
	I0419 18:59:12.907377   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.777032    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:12.907414   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.778597    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:12.907439   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.832721    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.907474   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.832971    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.907512   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: W0420 01:57:56.061439    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.907580   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.063005    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.907604   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: W0420 01:57:56.073517    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.073647    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: W0420 01:57:56.303763    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.303918    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.515345    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="1.6s"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: I0420 01:57:56.583532    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.584646    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:57:58 multinode-348000 kubelet[1526]: I0420 01:57:58.185924    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.850138    1526 kubelet_node_status.go:112] "Node was previously registered" node="multinode-348000"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.850459    1526 kubelet_node_status.go:76] "Successfully registered node" node="multinode-348000"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.852895    1526 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.854574    1526 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.855598    1526 setters.go:580] "Node became not ready" node="multinode-348000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-04-20T01:58:00Z","lastTransitionTime":"2024-04-20T01:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.022496    1526 apiserver.go:52] "Watching apiserver"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.028549    1526 topology_manager.go:215] "Topology Admit Handler" podUID="274342c4-c21f-4279-b0ea-743d8e2c1463" podNamespace="kube-system" podName="kube-proxy-kj76x"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.028950    1526 topology_manager.go:215] "Topology Admit Handler" podUID="46c91d5e-edfa-4254-a802-148047caeab5" podNamespace="kube-system" podName="kindnet-s4fsr"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.029150    1526 topology_manager.go:215] "Topology Admit Handler" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7w477"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.029359    1526 topology_manager.go:215] "Topology Admit Handler" podUID="ffa0cfb9-91fb-4d5b-abe7-11992c731b74" podNamespace="kube-system" podName="storage-provisioner"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.029596    1526 topology_manager.go:215] "Topology Admit Handler" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916" podNamespace="default" podName="busybox-fc5497c4f-xnz2k"
	I0419 18:59:12.908169   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.030004    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.908169   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.030339    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-348000" podUID="af4afa87-c484-4b73-9a4d-e86ddcd90049"
	I0419 18:59:12.908234   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.031127    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-348000" podUID="18f5e677-6a96-47ee-9f61-60ab9445eb92"
	I0419 18:59:12.908234   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.036486    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.908234   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.078433    1526 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-348000"
	I0419 18:59:12.908234   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.080072    1526 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.080948    1526 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.155980    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/274342c4-c21f-4279-b0ea-743d8e2c1463-xtables-lock\") pod \"kube-proxy-kj76x\" (UID: \"274342c4-c21f-4279-b0ea-743d8e2c1463\") " pod="kube-system/kube-proxy-kj76x"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.156217    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/274342c4-c21f-4279-b0ea-743d8e2c1463-lib-modules\") pod \"kube-proxy-kj76x\" (UID: \"274342c4-c21f-4279-b0ea-743d8e2c1463\") " pod="kube-system/kube-proxy-kj76x"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157104    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/46c91d5e-edfa-4254-a802-148047caeab5-cni-cfg\") pod \"kindnet-s4fsr\" (UID: \"46c91d5e-edfa-4254-a802-148047caeab5\") " pod="kube-system/kindnet-s4fsr"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157248    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46c91d5e-edfa-4254-a802-148047caeab5-xtables-lock\") pod \"kindnet-s4fsr\" (UID: \"46c91d5e-edfa-4254-a802-148047caeab5\") " pod="kube-system/kindnet-s4fsr"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.157178    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.157539    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:01.657504317 +0000 UTC m=+6.817666984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157392    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ffa0cfb9-91fb-4d5b-abe7-11992c731b74-tmp\") pod \"storage-provisioner\" (UID: \"ffa0cfb9-91fb-4d5b-abe7-11992c731b74\") " pod="kube-system/storage-provisioner"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157844    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46c91d5e-edfa-4254-a802-148047caeab5-lib-modules\") pod \"kindnet-s4fsr\" (UID: \"46c91d5e-edfa-4254-a802-148047caeab5\") " pod="kube-system/kindnet-s4fsr"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.176143    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89aa15d5f8e328791151d96100a36918" path="/var/lib/kubelet/pods/89aa15d5f8e328791151d96100a36918/volumes"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.179130    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fef0b92f87f018a58c19217fdf5d6e1" path="/var/lib/kubelet/pods/8fef0b92f87f018a58c19217fdf5d6e1/volumes"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.206903    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.207139    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.207264    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:01.707244177 +0000 UTC m=+6.867406744 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.241569    1526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-348000" podStartSLOduration=0.241545984 podStartE2EDuration="241.545984ms" podCreationTimestamp="2024-04-20 01:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-20 01:58:01.218870918 +0000 UTC m=+6.379033485" watchObservedRunningTime="2024-04-20 01:58:01.241545984 +0000 UTC m=+6.401708551"
	I0419 18:59:12.908848   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.287607    1526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-348000" podStartSLOduration=0.287584435 podStartE2EDuration="287.584435ms" podCreationTimestamp="2024-04-20 01:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-20 01:58:01.265671392 +0000 UTC m=+6.425834059" watchObservedRunningTime="2024-04-20 01:58:01.287584435 +0000 UTC m=+6.447747102"
	I0419 18:59:12.908848   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.663973    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:12.908889   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.664077    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:02.664058382 +0000 UTC m=+7.824220949 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:12.909015   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.764474    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.764518    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.764584    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:02.764566131 +0000 UTC m=+7.924728698 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: I0420 01:58:02.563904    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5a777eba295e3b640d8d8a60aedcc20243d0f4a6fc4d3f3391b06fc6de0247a"
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.564077    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: I0420 01:58:02.565075    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-348000" podUID="af4afa87-c484-4b73-9a4d-e86ddcd90049"
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.679358    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.679588    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:04.67956768 +0000 UTC m=+9.839730247 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.789713    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.791860    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.792206    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:04.792183185 +0000 UTC m=+9.952345752 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:03 multinode-348000 kubelet[1526]: E0420 01:58:03.170851    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.169519    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.700421    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.700676    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:08.700644486 +0000 UTC m=+13.860807053 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.801637    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909565   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.801751    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909607   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.801874    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:08.801835856 +0000 UTC m=+13.961998423 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909607   14960 command_runner.go:130] > Apr 20 01:58:05 multinode-348000 kubelet[1526]: E0420 01:58:05.169947    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.909607   14960 command_runner.go:130] > Apr 20 01:58:06 multinode-348000 kubelet[1526]: E0420 01:58:06.169499    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.909743   14960 command_runner.go:130] > Apr 20 01:58:07 multinode-348000 kubelet[1526]: E0420 01:58:07.170147    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.909795   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.169208    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.909795   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.751778    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.752347    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:16.752328447 +0000 UTC m=+21.912491114 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.852291    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.852347    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.852455    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:16.852435774 +0000 UTC m=+22.012598341 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:09 multinode-348000 kubelet[1526]: E0420 01:58:09.169017    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:10 multinode-348000 kubelet[1526]: E0420 01:58:10.169399    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:11 multinode-348000 kubelet[1526]: E0420 01:58:11.169467    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:12 multinode-348000 kubelet[1526]: E0420 01:58:12.169441    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:13 multinode-348000 kubelet[1526]: E0420 01:58:13.169983    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:14 multinode-348000 kubelet[1526]: E0420 01:58:14.169635    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:15 multinode-348000 kubelet[1526]: E0420 01:58:15.169488    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.169756    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.835157    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.835299    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:32.835279204 +0000 UTC m=+37.995441771 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.936116    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.910426   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.936169    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.910476   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.936232    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:32.936212581 +0000 UTC m=+38.096375148 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.910476   14960 command_runner.go:130] > Apr 20 01:58:17 multinode-348000 kubelet[1526]: E0420 01:58:17.169160    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.910604   14960 command_runner.go:130] > Apr 20 01:58:18 multinode-348000 kubelet[1526]: E0420 01:58:18.171760    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.910604   14960 command_runner.go:130] > Apr 20 01:58:19 multinode-348000 kubelet[1526]: E0420 01:58:19.169723    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.910604   14960 command_runner.go:130] > Apr 20 01:58:20 multinode-348000 kubelet[1526]: E0420 01:58:20.169542    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.910693   14960 command_runner.go:130] > Apr 20 01:58:21 multinode-348000 kubelet[1526]: E0420 01:58:21.169675    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.910744   14960 command_runner.go:130] > Apr 20 01:58:22 multinode-348000 kubelet[1526]: E0420 01:58:22.169364    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.910744   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: E0420 01:58:23.169569    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.910744   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: I0420 01:58:23.960680    1526 scope.go:117] "RemoveContainer" containerID="8a37c65d06fabf8d836ffb9a511bb6df5b549fa37051ef79f1f839076af60512"
	I0419 18:59:12.910744   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: I0420 01:58:23.961154    1526 scope.go:117] "RemoveContainer" containerID="f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919"
	I0419 18:59:12.910837   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: E0420 01:58:23.961603    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kindnet-cni pod=kindnet-s4fsr_kube-system(46c91d5e-edfa-4254-a802-148047caeab5)\"" pod="kube-system/kindnet-s4fsr" podUID="46c91d5e-edfa-4254-a802-148047caeab5"
	I0419 18:59:12.910837   14960 command_runner.go:130] > Apr 20 01:58:24 multinode-348000 kubelet[1526]: E0420 01:58:24.169608    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.910837   14960 command_runner.go:130] > Apr 20 01:58:25 multinode-348000 kubelet[1526]: E0420 01:58:25.169976    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.910837   14960 command_runner.go:130] > Apr 20 01:58:26 multinode-348000 kubelet[1526]: E0420 01:58:26.169734    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:27 multinode-348000 kubelet[1526]: E0420 01:58:27.170054    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:28 multinode-348000 kubelet[1526]: E0420 01:58:28.169260    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:29 multinode-348000 kubelet[1526]: E0420 01:58:29.169306    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:30 multinode-348000 kubelet[1526]: E0420 01:58:30.169857    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:31 multinode-348000 kubelet[1526]: E0420 01:58:31.169543    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.169556    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.891318    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.891496    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:59:04.891477649 +0000 UTC m=+70.051640216 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.992269    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.992577    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.992723    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:59:04.992688767 +0000 UTC m=+70.152851434 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: I0420 01:58:33.115355    1526 scope.go:117] "RemoveContainer" containerID="e248c230a4aa379bf469f41a95d1ea2033316d322a10b6da0ae06f656334b936"
	I0419 18:59:12.912019   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: I0420 01:58:33.115897    1526 scope.go:117] "RemoveContainer" containerID="45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702"
	I0419 18:59:12.912019   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: E0420 01:58:33.116183    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ffa0cfb9-91fb-4d5b-abe7-11992c731b74)\"" pod="kube-system/storage-provisioner" podUID="ffa0cfb9-91fb-4d5b-abe7-11992c731b74"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: E0420 01:58:33.169303    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:34 multinode-348000 kubelet[1526]: E0420 01:58:34.169175    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:35 multinode-348000 kubelet[1526]: E0420 01:58:35.169508    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 kubelet[1526]: E0420 01:58:36.169960    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 kubelet[1526]: I0420 01:58:36.170769    1526 scope.go:117] "RemoveContainer" containerID="f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:37 multinode-348000 kubelet[1526]: E0420 01:58:37.171433    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:38 multinode-348000 kubelet[1526]: E0420 01:58:38.169747    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:39 multinode-348000 kubelet[1526]: E0420 01:58:39.169252    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:40 multinode-348000 kubelet[1526]: E0420 01:58:40.169368    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:40 multinode-348000 kubelet[1526]: I0420 01:58:40.269590    1526 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 kubelet[1526]: I0420 01:58:45.169759    1526 scope.go:117] "RemoveContainer" containerID="45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]: I0420 01:58:55.162183    1526 scope.go:117] "RemoveContainer" containerID="490377504e57c3189163833390967e79bb80d222691d4402677feb6f25ed22f4"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]: I0420 01:58:55.206283    1526 scope.go:117] "RemoveContainer" containerID="53f6a00490766be2eb687e6fff052ca7a46ae16a0baf4551e956c81550d673b2"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]: E0420 01:58:55.212558    1526 iptables.go:577] "Could not set up iptables canary" err=<
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0419 18:59:12.912614   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0419 18:59:12.912614   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 kubelet[1526]: I0420 01:59:05.918992    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75ff9f4e9dde29a997e4321dd3659a2ce7d479a75826a78c4d3525f1eb5f696f"
	I0419 18:59:12.912614   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 kubelet[1526]: I0420 01:59:05.948376    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f28a1e746a9b438367a8e05d2e1a085afb4abec4174f7a7eb80549e02b95047a"
	I0419 18:59:12.955048   14960 logs.go:123] Gathering logs for kube-apiserver [bd3aa93bac25] ...
	I0419 18:59:12.956069   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd3aa93bac25"
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:57.501840       1 options.go:221] external host was not specified, using 172.19.42.24
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:57.505380       1 server.go:148] Version: v1.30.0
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:57.505690       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:58.138487       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:58.138530       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:58.138987       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:58.139098       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:58.139890       1 instance.go:299] Using reconciler: lease
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.078678       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.078889       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.354874       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.355339       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.630985       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.818361       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.834974       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.835019       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.835028       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.835870       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.835981       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.837241       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.838781       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.838919       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.838930       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.841133       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.841240       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.842492       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.842627       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.842640       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.843439       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.843519       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.843649       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.844516       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.847031       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.847132       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.847143       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.847848       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.847881       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.847889       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.849069       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.849173       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.851437       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.851563       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.851574       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.852258       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.852357       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.852367       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.855318       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.855413       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.855499       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.857232       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.859073       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.859177       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.859187       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.866540       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0419 18:59:12.986761   14960 command_runner.go:130] ! W0420 01:57:59.866633       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0419 18:59:12.986761   14960 command_runner.go:130] ! W0420 01:57:59.866643       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0419 18:59:12.986761   14960 command_runner.go:130] ! I0420 01:57:59.873672       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0419 18:59:12.986761   14960 command_runner.go:130] ! W0420 01:57:59.873814       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.986835   14960 command_runner.go:130] ! W0420 01:57:59.873827       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:12.986877   14960 command_runner.go:130] ! I0420 01:57:59.875959       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0419 18:59:12.986877   14960 command_runner.go:130] ! W0420 01:57:59.875999       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.986877   14960 command_runner.go:130] ! I0420 01:57:59.909243       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0419 18:59:12.986920   14960 command_runner.go:130] ! W0420 01:57:59.909284       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.986975   14960 command_runner.go:130] ! I0420 01:58:00.597195       1 secure_serving.go:213] Serving securely on [::]:8443
	I0419 18:59:12.986975   14960 command_runner.go:130] ! I0420 01:58:00.597666       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:12.987008   14960 command_runner.go:130] ! I0420 01:58:00.598134       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.597703       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.597737       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.600064       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.600948       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.601165       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.601445       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.602539       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.602852       1 aggregator.go:163] waiting for initial CRD sync...
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.603187       1 controller.go:78] Starting OpenAPI AggregationController
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.604023       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.604384       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.606631       1 available_controller.go:423] Starting AvailableConditionController
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.606857       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607138       1 controller.go:116] Starting legacy_token_tracking_controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607178       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607325       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607349       1 controller.go:139] Starting OpenAPI controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607381       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607407       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607409       1 naming_controller.go:291] Starting NamingConditionController
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607487       1 establishing_controller.go:76] Starting EstablishingController
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607512       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607530       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607546       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.608170       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.608198       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.608328       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.608421       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607383       1 controller.go:87] Starting OpenAPI V3 controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.709605       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.736531       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.737086       1 shared_informer.go:320] Caches are synced for configmaps
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.737192       1 aggregator.go:165] initial CRD sync complete...
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.737219       1 autoregister_controller.go:141] Starting autoregister controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.737225       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.737230       1 cache.go:39] Caches are synced for autoregister controller
	I0419 18:59:12.987664   14960 command_runner.go:130] ! I0420 01:58:00.740699       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 18:59:12.987744   14960 command_runner.go:130] ! I0420 01:58:00.741004       1 policy_source.go:224] refreshing policies
	I0419 18:59:12.987744   14960 command_runner.go:130] ! I0420 01:58:00.742672       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0419 18:59:12.987744   14960 command_runner.go:130] ! I0420 01:58:00.747054       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0419 18:59:12.987744   14960 command_runner.go:130] ! I0420 01:58:00.805770       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0419 18:59:12.987744   14960 command_runner.go:130] ! I0420 01:58:00.807460       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0419 18:59:12.987744   14960 command_runner.go:130] ! I0420 01:58:00.814456       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0419 18:59:12.987856   14960 command_runner.go:130] ! I0420 01:58:00.814490       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0419 18:59:12.987856   14960 command_runner.go:130] ! I0420 01:58:00.815844       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0419 18:59:12.987893   14960 command_runner.go:130] ! I0420 01:58:01.612010       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0419 18:59:12.987893   14960 command_runner.go:130] ! W0420 01:58:02.160618       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.42.231 172.19.42.24]
	I0419 18:59:12.987893   14960 command_runner.go:130] ! I0420 01:58:02.163332       1 controller.go:615] quota admission added evaluator for: endpoints
	I0419 18:59:12.987941   14960 command_runner.go:130] ! I0420 01:58:02.176968       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0419 18:59:12.987941   14960 command_runner.go:130] ! I0420 01:58:03.430204       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0419 18:59:12.987977   14960 command_runner.go:130] ! I0420 01:58:03.761410       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0419 18:59:12.987977   14960 command_runner.go:130] ! I0420 01:58:03.780335       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0419 18:59:12.987977   14960 command_runner.go:130] ! I0420 01:58:03.907022       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0419 18:59:12.988020   14960 command_runner.go:130] ! I0420 01:58:03.924019       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0419 18:59:12.988060   14960 command_runner.go:130] ! W0420 01:58:22.143512       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.42.24]
	I0419 18:59:12.996606   14960 logs.go:123] Gathering logs for kindnet [ae0b21715f86] ...
	I0419 18:59:12.996606   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0b21715f86"
	I0419 18:59:13.027805   14960 command_runner.go:130] ! I0420 01:58:36.715209       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0419 18:59:13.027904   14960 command_runner.go:130] ! I0420 01:58:36.715359       1 main.go:107] hostIP = 172.19.42.24
	I0419 18:59:13.027904   14960 command_runner.go:130] ! podIP = 172.19.42.24
	I0419 18:59:13.027904   14960 command_runner.go:130] ! I0420 01:58:36.715480       1 main.go:116] setting mtu 1500 for CNI 
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:36.715877       1 main.go:146] kindnetd IP family: "ipv4"
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:36.806023       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:37.413197       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:37.413291       1 main.go:227] handling current node
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:37.413685       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:37.413745       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:37.414005       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.19.32.249 Flags: [] Table: 0} 
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:37.506308       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:37.506405       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:37.506676       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.19.37.59 Flags: [] Table: 0} 
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:47.525508       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:47.525608       1 main.go:227] handling current node
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:47.525629       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:47.525638       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:47.526101       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:47.526135       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:57.538448       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:57.538834       1 main.go:227] handling current node
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:57.538899       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:57.538926       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:13.028514   14960 command_runner.go:130] ! I0420 01:58:57.539176       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:13.028514   14960 command_runner.go:130] ! I0420 01:58:57.539274       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:13.028514   14960 command_runner.go:130] ! I0420 01:59:07.555783       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:13.028588   14960 command_runner.go:130] ! I0420 01:59:07.555932       1 main.go:227] handling current node
	I0419 18:59:13.028588   14960 command_runner.go:130] ! I0420 01:59:07.556426       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:13.028738   14960 command_runner.go:130] ! I0420 01:59:07.556438       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:13.028738   14960 command_runner.go:130] ! I0420 01:59:07.556563       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:13.028738   14960 command_runner.go:130] ! I0420 01:59:07.556590       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:13.037007   14960 logs.go:123] Gathering logs for coredns [352cf21a3e20] ...
	I0419 18:59:13.037007   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 352cf21a3e20"
	I0419 18:59:13.069693   14960 command_runner.go:130] > .:53
	I0419 18:59:13.070597   14960 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93714cfd58e203ac2baa48ea9c7b435951d2a9faed7a5c70b4e84c89c6c1fe4c1dfa41f14b3ebf0f5941dade673a82eaad960061e673dd78dcb856db3393b39d
	I0419 18:59:13.070636   14960 command_runner.go:130] > CoreDNS-1.11.1
	I0419 18:59:13.070636   14960 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0419 18:59:13.070636   14960 command_runner.go:130] > [INFO] 127.0.0.1:51206 - 14298 "HINFO IN 4972057462503628469.2167329557243878603. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028297062s
	I0419 18:59:13.076229   14960 logs.go:123] Gathering logs for kube-proxy [e438af0f1ec9] ...
	I0419 18:59:13.076229   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e438af0f1ec9"
	I0419 18:59:13.108366   14960 command_runner.go:130] ! I0420 01:58:03.129201       1 server_linux.go:69] "Using iptables proxy"
	I0419 18:59:13.108735   14960 command_runner.go:130] ! I0420 01:58:03.201631       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.42.24"]
	I0419 18:59:13.108780   14960 command_runner.go:130] ! I0420 01:58:03.344058       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 18:59:13.108780   14960 command_runner.go:130] ! I0420 01:58:03.344107       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 18:59:13.108811   14960 command_runner.go:130] ! I0420 01:58:03.344137       1 server_linux.go:165] "Using iptables Proxier"
	I0419 18:59:13.108860   14960 command_runner.go:130] ! I0420 01:58:03.353394       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 18:59:13.108898   14960 command_runner.go:130] ! I0420 01:58:03.354462       1 server.go:872] "Version info" version="v1.30.0"
	I0419 18:59:13.108898   14960 command_runner.go:130] ! I0420 01:58:03.354693       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:13.108940   14960 command_runner.go:130] ! I0420 01:58:03.358325       1 config.go:192] "Starting service config controller"
	I0419 18:59:13.108978   14960 command_runner.go:130] ! I0420 01:58:03.358366       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 18:59:13.108978   14960 command_runner.go:130] ! I0420 01:58:03.358985       1 config.go:101] "Starting endpoint slice config controller"
	I0419 18:59:13.108978   14960 command_runner.go:130] ! I0420 01:58:03.359176       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 18:59:13.108978   14960 command_runner.go:130] ! I0420 01:58:03.358997       1 config.go:319] "Starting node config controller"
	I0419 18:59:13.109020   14960 command_runner.go:130] ! I0420 01:58:03.368409       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 18:59:13.109020   14960 command_runner.go:130] ! I0420 01:58:03.459372       1 shared_informer.go:320] Caches are synced for service config
	I0419 18:59:13.109051   14960 command_runner.go:130] ! I0420 01:58:03.459745       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 18:59:13.109051   14960 command_runner.go:130] ! I0420 01:58:03.470538       1 shared_informer.go:320] Caches are synced for node config
	I0419 18:59:13.113308   14960 logs.go:123] Gathering logs for kube-proxy [a6586791413d] ...
	I0419 18:59:13.113308   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6586791413d"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.120497       1 server_linux.go:69] "Using iptables proxy"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.156956       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.42.231"]
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.208282       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.208472       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.208501       1 server_linux.go:165] "Using iptables Proxier"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.214693       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.216114       1 server.go:872] "Version info" version="v1.30.0"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.216181       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.219192       1 config.go:192] "Starting service config controller"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.219810       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.220079       1 config.go:101] "Starting endpoint slice config controller"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.220093       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.221802       1 config.go:319] "Starting node config controller"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.221980       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.320313       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.320380       1 shared_informer.go:320] Caches are synced for service config
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.322323       1 shared_informer.go:320] Caches are synced for node config
	I0419 18:59:13.148587   14960 logs.go:123] Gathering logs for kube-controller-manager [9638ddcd5428] ...
	I0419 18:59:13.148587   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9638ddcd5428"
	I0419 18:59:13.191813   14960 command_runner.go:130] ! I0420 01:35:03.372734       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:13.192583   14960 command_runner.go:130] ! I0420 01:35:03.812267       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0419 18:59:13.192583   14960 command_runner.go:130] ! I0420 01:35:03.812307       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:13.192583   14960 command_runner.go:130] ! I0420 01:35:03.816347       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:13.192788   14960 command_runner.go:130] ! I0420 01:35:03.816460       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:13.192805   14960 command_runner.go:130] ! I0420 01:35:03.817145       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0419 18:59:13.192805   14960 command_runner.go:130] ! I0420 01:35:03.817250       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:13.192805   14960 command_runner.go:130] ! I0420 01:35:07.961997       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0419 18:59:13.192855   14960 command_runner.go:130] ! I0420 01:35:07.962027       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0419 18:59:13.192855   14960 command_runner.go:130] ! I0420 01:35:07.977942       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0419 18:59:13.192893   14960 command_runner.go:130] ! I0420 01:35:07.978602       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0419 18:59:13.192893   14960 command_runner.go:130] ! I0420 01:35:07.980093       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0419 18:59:13.192893   14960 command_runner.go:130] ! I0420 01:35:07.989698       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0419 18:59:13.193592   14960 command_runner.go:130] ! I0420 01:35:07.990033       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0419 18:59:13.193925   14960 command_runner.go:130] ! I0420 01:35:07.990321       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.005238       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.005791       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.006985       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.018816       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.019229       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.019480       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.046904       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.047815       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.049696       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.050007       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.062049       1 shared_informer.go:320] Caches are synced for tokens
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.065356       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.065873       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.113476       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.114130       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.116086       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.129157       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.129533       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.129568       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.165596       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.166223       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.166242       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.211668       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.211749       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.211766       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.232421       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.232496       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.232934       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.232991       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.502058       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.502113       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! W0420 01:35:08.502140       1 shared_informer.go:597] resyncPeriod 21h44m16.388395173s is smaller than resyncCheckPeriod 22h35m59.940993284s and the informer has already started. Changing it to 22h35m59.940993284s
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.502208       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502278       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502298       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502314       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502330       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502351       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502407       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502437       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502458       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502479       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502501       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! W0420 01:35:08.502514       1 shared_informer.go:597] resyncPeriod 19h4m59.465157498s is smaller than resyncCheckPeriod 22h35m59.940993284s and the informer has already started. Changing it to 22h35m59.940993284s
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502638       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502666       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502684       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502713       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502732       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502771       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502793       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502820       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.503928       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.503949       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.504053       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.534828       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.534961       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.674769       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.675139       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.675159       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.825012       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.825352       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.825549       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.067591       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.068206       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.068502       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.068578       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.320310       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.320746       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.321134       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.516184       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.516262       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.691568       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.693516       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.693713       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.694525       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.933130       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.933168       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.936074       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.217647       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.218375       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.218475       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.267124       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.267436       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.267570       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.268204       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.268422       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0419 18:59:13.194971   14960 command_runner.go:130] ! E0420 01:35:10.316394       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.316683       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.472792       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.472905       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.472918       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:10.624680       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:10.624742       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:10.624753       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:10.772273       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:10.772422       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:10.773389       1 shared_informer.go:313] Waiting for caches to sync for job
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:10.922317       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:10.922464       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:10.922478       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.070777       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.071059       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.071119       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.071166       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.071195       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.071205       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.222012       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.222056       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.222746       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.372624       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.372812       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.372965       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.522757       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.522983       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.523000       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.671210       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.671410       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.671429       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.820688       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.821596       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.821935       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0419 18:59:13.195959   14960 command_runner.go:130] ! E0420 01:35:11.971137       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.971301       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.971316       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.971323       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.121255       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.121746       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.121947       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.274169       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.274383       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.274402       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.318009       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.318126       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.318164       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.318524       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.318628       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.318650       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.319568       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.319800       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.319996       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.320096       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.320128       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.320161       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.320270       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:22.381189       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:22.381256       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:22.381472       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:22.381508       1 shared_informer.go:313] Waiting for caches to sync for node
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:22.395580       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:22.395660       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:22.396587       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:22.396886       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.405182       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.428741       1 shared_informer.go:320] Caches are synced for service account
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.430037       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.433041       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.440027       1 shared_informer.go:320] Caches are synced for namespace
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.466474       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.469554       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.477923       1 shared_informer.go:320] Caches are synced for PV protection
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.479748       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.479794       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.480700       1 shared_informer.go:320] Caches are synced for PVC protection
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.492034       1 shared_informer.go:320] Caches are synced for expand
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.492084       1 shared_informer.go:320] Caches are synced for endpoint
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.492130       1 shared_informer.go:320] Caches are synced for job
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.497920       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.498399       1 shared_informer.go:320] Caches are synced for node
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.498473       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.498515       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.498526       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.498531       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.508187       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000\" does not exist"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.508396       1 shared_informer.go:320] Caches are synced for GC
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.512585       1 shared_informer.go:320] Caches are synced for crt configmap
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.520820       1 shared_informer.go:320] Caches are synced for daemon sets
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.521073       1 shared_informer.go:320] Caches are synced for stateful set
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.521189       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.521223       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.521268       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.527709       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.528722       1 shared_informer.go:320] Caches are synced for cronjob
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.528751       1 shared_informer.go:320] Caches are synced for ephemeral
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.528767       1 shared_informer.go:320] Caches are synced for TTL
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.529370       1 shared_informer.go:320] Caches are synced for HPA
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.529414       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.529477       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.529509       1 shared_informer.go:320] Caches are synced for persistent volume
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.552273       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000" podCIDRs=["10.244.0.0/24"]
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.569198       1 shared_informer.go:320] Caches are synced for taint
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.569287       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.569354       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.569429       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.574991       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.590559       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.623057       1 shared_informer.go:320] Caches are synced for deployment
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.623597       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.651041       1 shared_informer.go:320] Caches are synced for disruption
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.699011       1 shared_informer.go:320] Caches are synced for attach detach
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.705303       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.706815       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:23.168892       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:23.169115       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:23.179171       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:23.263116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="374.4156ms"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:23.291471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.172623ms"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:23.291547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.106µs"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:23.578182       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.803114ms"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:23.630233       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.666311ms"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:23.630467       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="183.125µs"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:36.906373       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="291.116µs"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:36.934151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="76.104µs"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:35:37.573034       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:35:39.217159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.488µs"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:35:39.265403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.862669ms"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:35:39.266023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="552.786µs"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:38:18.575680       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m02\" does not exist"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:38:18.590900       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m02" podCIDRs=["10.244.1.0/24"]
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:38:22.613051       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m02"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:38:37.669535       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:39:03.031296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.090021ms"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:39:03.053897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.363721ms"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:39:03.054543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.499µs"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:39:05.783927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.434034ms"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:39:05.784276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="108.901µs"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:39:07.103598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.163039ms"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:39:07.104054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.4µs"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:42:52.390190       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:42:52.390530       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:42:52.403944       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m03" podCIDRs=["10.244.2.0/24"]
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:42:52.676079       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m03"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:43:11.211743       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:50:42.818871       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:53:22.621370       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:53:28.752017       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:53:28.753300       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:53:28.789161       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m03" podCIDRs=["10.244.3.0/24"]
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:53:36.097701       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m03"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:55:13.205537       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.217572   14960 logs.go:123] Gathering logs for Docker ...
	I0419 18:59:13.217572   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 18:59:13.251198   14960 command_runner.go:130] > Apr 20 01:56:27 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:13.251259   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:13.251259   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:13.251259   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:13.251313   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0419 18:59:13.251348   14960 command_runner.go:130] > Apr 20 01:56:28 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:28 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:28 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 systemd[1]: Starting Docker Application Container Engine...
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[657]: time="2024-04-20T01:57:18.710176447Z" level=info msg="Starting up"
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[657]: time="2024-04-20T01:57:18.711651787Z" level=info msg="containerd not running, starting managed containerd"
	I0419 18:59:13.251914   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[657]: time="2024-04-20T01:57:18.716746379Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=664
	I0419 18:59:13.251914   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.747165139Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0419 18:59:13.251963   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778478063Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0419 18:59:13.252020   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778645056Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0419 18:59:13.252058   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778743452Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0419 18:59:13.252058   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778860747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.252090   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.780842867Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:13.252145   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.780950062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781281849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781381945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781405744Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781418543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781890324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.782561296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786065554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786174049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786324143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786418639Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.787110911Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.787239206Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.787257405Z" level=info msg="metadata content store policy set" policy=shared
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794203322Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794271219Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794292218Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794308818Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794325217Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794399514Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794805397Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795021089Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795123284Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795209281Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795227280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795252079Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.252776   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795270178Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.252876   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795305177Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.252876   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795321176Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.252876   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795336476Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.252876   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795368674Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.252876   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795383074Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.253037   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795405873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253037   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795423972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253037   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795438172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253098   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795453671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253098   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795468970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253137   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795483970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253137   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795576866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253186   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795594465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253186   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795610465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253224   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795628364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253265   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795642863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253265   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795657163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253305   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795671762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253342   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795713760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0419 18:59:13.253342   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795756259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253381   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795811856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253416   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795843255Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0419 18:59:13.253416   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795920052Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0419 18:59:13.253455   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795944151Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0419 18:59:13.253489   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796175542Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0419 18:59:13.253561   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796194141Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0419 18:59:13.253561   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796263238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253611   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796305336Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0419 18:59:13.253646   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796319336Z" level=info msg="NRI interface is disabled by configuration."
	I0419 18:59:13.253683   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.797416591Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0419 18:59:13.253683   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.797499188Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0419 18:59:13.253717   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.797659381Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0419 18:59:13.253717   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.798178860Z" level=info msg="containerd successfully booted in 0.054054s"
	I0419 18:59:13.253788   14960 command_runner.go:130] > Apr 20 01:57:19 multinode-348000 dockerd[657]: time="2024-04-20T01:57:19.782299514Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0419 18:59:13.253788   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.015692930Z" level=info msg="Loading containers: start."
	I0419 18:59:13.253826   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.458486133Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0419 18:59:13.253859   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.551244732Z" level=info msg="Loading containers: done."
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.579065252Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.579904847Z" level=info msg="Daemon has completed initialization"
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.637363974Z" level=info msg="API listen on [::]:2376"
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 systemd[1]: Started Docker Application Container Engine.
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.639403561Z" level=info msg="API listen on /var/run/docker.sock"
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.472939019Z" level=info msg="Processing signal 'terminated'"
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 systemd[1]: Stopping Docker Application Container Engine...
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.475778002Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.476696029Z" level=info msg="Daemon shutdown complete"
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.476992338Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.477157542Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 systemd[1]: docker.service: Deactivated successfully.
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 systemd[1]: Stopped Docker Application Container Engine.
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 systemd[1]: Starting Docker Application Container Engine...
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:47.551071055Z" level=info msg="Starting up"
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:47.552229889Z" level=info msg="containerd not running, starting managed containerd"
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:47.555196776Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1058
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.593728507Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623742487Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623851391Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623939793Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623957394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624003795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624024296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624225802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624329205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624352205Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624363806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624391206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624622913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.627825907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.627876709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.254479   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628096615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:13.254529   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628227419Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0419 18:59:13.254529   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628259620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0419 18:59:13.254529   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628280321Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0419 18:59:13.254529   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628292621Z" level=info msg="metadata content store policy set" policy=shared
	I0419 18:59:13.254529   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628514127Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0419 18:59:13.254634   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628716033Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0419 18:59:13.254634   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628764035Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0419 18:59:13.254676   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628783935Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0419 18:59:13.254676   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628872138Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0419 18:59:13.254729   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628938240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0419 18:59:13.254729   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.629513057Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0419 18:59:13.254767   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.629754764Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0419 18:59:13.254809   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.629936569Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0419 18:59:13.254849   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630060973Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0419 18:59:13.254849   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630086474Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.254892   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630105074Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.254892   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630122275Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.254938   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630140375Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.254938   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630157976Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.254980   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630174076Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.255019   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630191277Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.255064   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630206077Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.255103   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630234378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255103   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630252178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255138   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630267579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630283379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630298980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630314780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630328781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630360082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630377682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630410083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630423583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630455984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630487185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630505186Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630528987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630643490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630666391Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630895497Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630922398Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630934798Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630945799Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.631020001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.631067102Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.631083303Z" level=info msg="NRI interface is disabled by configuration."
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632230736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632319639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632396541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0419 18:59:13.255807   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632594347Z" level=info msg="containerd successfully booted in 0.042627s"
	I0419 18:59:13.255807   14960 command_runner.go:130] > Apr 20 01:57:48 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:48.604760074Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0419 18:59:13.255807   14960 command_runner.go:130] > Apr 20 01:57:48 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:48.637031921Z" level=info msg="Loading containers: start."
	I0419 18:59:13.255807   14960 command_runner.go:130] > Apr 20 01:57:48 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:48.936729515Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0419 18:59:13.255925   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.021589305Z" level=info msg="Loading containers: done."
	I0419 18:59:13.255925   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.048182786Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	I0419 18:59:13.255925   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.048316590Z" level=info msg="Daemon has completed initialization"
	I0419 18:59:13.255925   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.095567976Z" level=info msg="API listen on /var/run/docker.sock"
	I0419 18:59:13.256021   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 systemd[1]: Started Docker Application Container Engine.
	I0419 18:59:13.256021   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.098304756Z" level=info msg="API listen on [::]:2376"
	I0419 18:59:13.256021   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:13.256021   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:13.256116   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:13.256116   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:13.256116   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0419 18:59:13.256116   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Loaded network plugin cni"
	I0419 18:59:13.256195   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0419 18:59:13.256195   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0419 18:59:13.256195   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0419 18:59:13.256195   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0419 18:59:13.256195   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Start cri-dockerd grpc backend"
	I0419 18:59:13.256274   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0419 18:59:13.256274   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-xnz2k_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"476e3efb38684054cbc21c027cf1ddd3f9ca47bb829786f8636fd877fd4b2f81\""
	I0419 18:59:13.256353   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-7w477_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2dd294415aae178d6b9bed0368d49bedc6d0afa8f5b9ad0011c73ffcb2c24b3c\""
	I0419 18:59:13.256353   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.930297132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.256353   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.930785146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.256430   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.930860749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.256430   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.931659072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.256507   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002064338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.256507   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002134840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.256507   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002149541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.256584   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002292345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.256584   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e8baa597c1467ae8c3a1ce9abf0a378ddcffed5a93f7b41dddb4ce4511320dfd/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:13.256661   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151299517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.256661   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151377019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.256661   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151407720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.256738   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151504323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.256738   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169004837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.256738   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169190142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.256816   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169211543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.256816   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169324146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.256816   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/118cca57d1f547838d0c2442f2945e9daf9b041170bf162489525286bf3d75c2/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:13.256893   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7052a6f04def38545970026f2934eb29913066396b26eb86f6675e7c0c685db/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:13.256893   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ab9ff1d9068805d6a2ad10084128436e5b1fcaaa8c64f2f1a5e811455f0f99ee/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:13.256970   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441120322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.256970   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441388229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.256970   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441493933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257047   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441783141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257047   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.541538868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257047   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.541743874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257123   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.541768275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257123   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.542244089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257199   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.635958239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257199   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.636305549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257199   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.636479754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257276   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.636776363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257276   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.703176711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257352   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.703241613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257352   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.703253713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257352   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.704949863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257459   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:00Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0419 18:59:13.257459   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.682944236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257530   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.683066839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257530   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.683087340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257530   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.683203743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257605   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.775229244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257605   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.775527153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257605   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.775671457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257677   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.776004967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257677   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.791300015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257677   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.791478721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257750   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.791611925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257750   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.792335946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257750   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/09f65a695303814b61d199dd53caa1efad532c76b04176a404206b865fd6b38a/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:13.257822   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5472c1fba3929b8a427273be545db7fb7df3c0ffbf035e24a1d3b71418b9e031/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:13.257822   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.150688061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257899   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.150834665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257899   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.151084573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257940   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.152395011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.341191051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.341388457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.341505460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.342279283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b5a777eba295e3b640d8d8a60aedcc20243d0f4a6fc4d3f3391b06fc6de0247a/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.851490425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.852225247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.852338750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.853459583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1052]: time="2024-04-20T01:58:23.324898945Z" level=info msg="ignoring event" container=f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:23.325982179Z" level=info msg="shim disconnected" id=f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919 namespace=moby
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:23.326071582Z" level=warning msg="cleaning up after shim disconnected" id=f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919 namespace=moby
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:23.326085983Z" level=info msg="cleaning up dead shim" namespace=moby
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1052]: time="2024-04-20T01:58:32.676558128Z" level=info msg="ignoring event" container=45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:32.681127769Z" level=info msg="shim disconnected" id=45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702 namespace=moby
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:32.681255073Z" level=warning msg="cleaning up after shim disconnected" id=45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702 namespace=moby
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:32.681323075Z" level=info msg="cleaning up dead shim" namespace=moby
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356286643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356444648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356547351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356850260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.371313874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.372274603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.372497010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.373020725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.258489   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.468874089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.258489   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.469011493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.258489   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.469033394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.258489   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.469948221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.258489   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.577907307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.258489   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.578194516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.258489   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.578360121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.258620   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.578991939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.258658   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:59:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f28a1e746a9b438367a8e05d2e1a085afb4abec4174f7a7eb80549e02b95047a/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:59:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/75ff9f4e9dde29a997e4321dd3659a2ce7d479a75826a78c4d3525f1eb5f696f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.046055457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.046333943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.046360842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.047301594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.170326341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.170444835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.170467134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.171235195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:12 multinode-348000 dockerd[1052]: 2024/04/20 01:59:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.259226   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.259226   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.259226   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.259226   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.293582   14960 logs.go:123] Gathering logs for container status ...
	I0419 18:59:13.293582   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 18:59:13.374555   14960 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0419 18:59:13.374555   14960 command_runner.go:130] > d608b74b0597f       8c811b4aec35f                                                                                         8 seconds ago        Running             busybox                   1                   75ff9f4e9dde2       busybox-fc5497c4f-xnz2k
	I0419 18:59:13.374555   14960 command_runner.go:130] > 352cf21a3e202       cbb01a7bd410d                                                                                         8 seconds ago        Running             coredns                   1                   f28a1e746a9b4       coredns-7db6d8ff4d-7w477
	I0419 18:59:13.374555   14960 command_runner.go:130] > c6f350bee7762       6e38f40d628db                                                                                         28 seconds ago       Running             storage-provisioner       2                   5472c1fba3929       storage-provisioner
	I0419 18:59:13.374555   14960 command_runner.go:130] > ae0b21715f861       4950bb10b3f87                                                                                         37 seconds ago       Running             kindnet-cni               2                   b5a777eba295e       kindnet-s4fsr
	I0419 18:59:13.374555   14960 command_runner.go:130] > f8c798c994078       4950bb10b3f87                                                                                         About a minute ago   Exited              kindnet-cni               1                   b5a777eba295e       kindnet-s4fsr
	I0419 18:59:13.375294   14960 command_runner.go:130] > 45383c4290ad1       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   5472c1fba3929       storage-provisioner
	I0419 18:59:13.375294   14960 command_runner.go:130] > e438af0f1ec9e       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   09f65a6953038       kube-proxy-kj76x
	I0419 18:59:13.375294   14960 command_runner.go:130] > 2deabe4dbdf41       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   ab9ff1d906880       etcd-multinode-348000
	I0419 18:59:13.375433   14960 command_runner.go:130] > bd3aa93bac25b       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   d7052a6f04def       kube-apiserver-multinode-348000
	I0419 18:59:13.375471   14960 command_runner.go:130] > b67f2295d26ca       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   118cca57d1f54       kube-controller-manager-multinode-348000
	I0419 18:59:13.375497   14960 command_runner.go:130] > d57aee391c146       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   e8baa597c1467       kube-scheduler-multinode-348000
	I0419 18:59:13.375544   14960 command_runner.go:130] > d8afb3e1fb946       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   476e3efb38684       busybox-fc5497c4f-xnz2k
	I0419 18:59:13.375604   14960 command_runner.go:130] > 627b84abf45cd       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   2dd294415aae1       coredns-7db6d8ff4d-7w477
	I0419 18:59:13.375629   14960 command_runner.go:130] > a6586791413d0       a0bf559e280cf                                                                                         23 minutes ago       Exited              kube-proxy                0                   7935893e9f22a       kube-proxy-kj76x
	I0419 18:59:13.375629   14960 command_runner.go:130] > 9638ddcd54285       c7aad43836fa5                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   6e420625b84be       kube-controller-manager-multinode-348000
	I0419 18:59:13.375629   14960 command_runner.go:130] > e476774b8f77e       259c8277fcbbc                                                                                         24 minutes ago       Exited              kube-scheduler            0                   e5d733991bf1a       kube-scheduler-multinode-348000
	I0419 18:59:13.377871   14960 logs.go:123] Gathering logs for etcd [2deabe4dbdf4] ...
	I0419 18:59:13.378001   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2deabe4dbdf4"
	I0419 18:59:13.408304   14960 command_runner.go:130] ! {"level":"warn","ts":"2024-04-20T01:57:57.046906Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0419 18:59:13.408523   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.051203Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.19.42.24:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.19.42.24:2380","--initial-cluster=multinode-348000=https://172.19.42.24:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.19.42.24:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.19.42.24:2380","--name=multinode-348000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0419 18:59:13.408523   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.05132Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0419 18:59:13.408641   14960 command_runner.go:130] ! {"level":"warn","ts":"2024-04-20T01:57:57.053068Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0419 18:59:13.408641   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.053085Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.19.42.24:2380"]}
	I0419 18:59:13.408714   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.053402Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0419 18:59:13.408756   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.06821Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"]}
	I0419 18:59:13.408836   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.071769Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-348000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.19.42.24:2380"],"listen-peer-urls":["https://172.19.42.24:2380"],"advertise-client-urls":["https://172.19.42.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0419 18:59:13.408943   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.117145Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"37.959314ms"}
	I0419 18:59:13.408969   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.163657Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0419 18:59:13.409000   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186114Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","commit-index":1996}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c switched to configuration voters=()"}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became follower at term 2"}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 4fba18389b33806c [peers: [], term: 2, commit: 1996, applied: 0, lastindex: 1996, lastterm: 2]"}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"warn","ts":"2024-04-20T01:57:57.204366Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.210889Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1364}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.22333Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1726}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.233905Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.247902Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"4fba18389b33806c","timeout":"7s"}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.252957Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"4fba18389b33806c"}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.253239Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"4fba18389b33806c","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.257675Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0419 18:59:13.409630   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.259962Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0419 18:59:13.409630   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.260237Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0419 18:59:13.409630   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.26046Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0419 18:59:13.409630   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c switched to configuration voters=(5744930906065567852)"}
	I0419 18:59:13.409779   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264281Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","added-peer-id":"4fba18389b33806c","added-peer-peer-urls":["https://172.19.42.231:2380"]}
	I0419 18:59:13.409839   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264439Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","cluster-version":"3.5"}
	I0419 18:59:13.409839   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264612Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0419 18:59:13.409839   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.271976Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0419 18:59:13.409839   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.273753Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4fba18389b33806c","initial-advertise-peer-urls":["https://172.19.42.24:2380"],"listen-peer-urls":["https://172.19.42.24:2380"],"advertise-client-urls":["https://172.19.42.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0419 18:59:13.410177   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.27526Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0419 18:59:13.410177   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.27622Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.42.24:2380"}
	I0419 18:59:13.410177   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.277207Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.42.24:2380"}
	I0419 18:59:13.410255   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c is starting a new election at term 2"}
	I0419 18:59:13.410282   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became pre-candidate at term 2"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c received MsgPreVoteResp from 4fba18389b33806c at term 2"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became candidate at term 3"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c received MsgVoteResp from 4fba18389b33806c at term 3"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became leader at term 3"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4fba18389b33806c elected leader 4fba18389b33806c at term 3"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.994477Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4fba18389b33806c","local-member-attributes":"{Name:multinode-348000 ClientURLs:[https://172.19.42.24:2379]}","request-path":"/0/members/4fba18389b33806c/attributes","cluster-id":"dca2ede42d67bc1c","publish-timeout":"7s"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.994493Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.994512Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.996572Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.996617Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.999043Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.42.24:2379"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.999341Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0419 18:59:13.417189   14960 logs.go:123] Gathering logs for coredns [627b84abf45c] ...
	I0419 18:59:13.418082   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627b84abf45c"
	I0419 18:59:13.455979   14960 command_runner.go:130] > .:53
	I0419 18:59:13.456857   14960 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93714cfd58e203ac2baa48ea9c7b435951d2a9faed7a5c70b4e84c89c6c1fe4c1dfa41f14b3ebf0f5941dade673a82eaad960061e673dd78dcb856db3393b39d
	I0419 18:59:13.456857   14960 command_runner.go:130] > CoreDNS-1.11.1
	I0419 18:59:13.456857   14960 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0419 18:59:13.456857   14960 command_runner.go:130] > [INFO] 127.0.0.1:37904 - 37003 "HINFO IN 1336380353163369387.5260466772500757990. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.053891439s
	I0419 18:59:13.456857   14960 command_runner.go:130] > [INFO] 10.244.1.2:47846 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002913s
	I0419 18:59:13.456936   14960 command_runner.go:130] > [INFO] 10.244.1.2:60728 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.118385602s
	I0419 18:59:13.456936   14960 command_runner.go:130] > [INFO] 10.244.1.2:48827 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.043741711s
	I0419 18:59:13.456979   14960 command_runner.go:130] > [INFO] 10.244.1.2:57126 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.111854404s
	I0419 18:59:13.456979   14960 command_runner.go:130] > [INFO] 10.244.0.3:44468 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001971s
	I0419 18:59:13.456979   14960 command_runner.go:130] > [INFO] 10.244.0.3:58477 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.002287005s
	I0419 18:59:13.457024   14960 command_runner.go:130] > [INFO] 10.244.0.3:39825 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000198301s
	I0419 18:59:13.457024   14960 command_runner.go:130] > [INFO] 10.244.0.3:54956 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000604s
	I0419 18:59:13.457049   14960 command_runner.go:130] > [INFO] 10.244.1.2:48593 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001261s
	I0419 18:59:13.457049   14960 command_runner.go:130] > [INFO] 10.244.1.2:58743 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.027871268s
	I0419 18:59:13.457098   14960 command_runner.go:130] > [INFO] 10.244.1.2:44517 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002274s
	I0419 18:59:13.457098   14960 command_runner.go:130] > [INFO] 10.244.1.2:35998 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000219501s
	I0419 18:59:13.457158   14960 command_runner.go:130] > [INFO] 10.244.1.2:58770 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012982932s
	I0419 18:59:13.457158   14960 command_runner.go:130] > [INFO] 10.244.1.2:55456 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174201s
	I0419 18:59:13.457199   14960 command_runner.go:130] > [INFO] 10.244.1.2:59031 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001304s
	I0419 18:59:13.457247   14960 command_runner.go:130] > [INFO] 10.244.1.2:41687 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000198401s
	I0419 18:59:13.457247   14960 command_runner.go:130] > [INFO] 10.244.0.3:46929 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003044s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:35877 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000325701s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:53705 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000318601s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:40560 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164401s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:53239 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001239s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:39754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001464s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:41397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001668s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:49126 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001646s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.1.2:37850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115501s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.1.2:44063 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001443s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.1.2:39924 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000607s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.1.2:53244 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000622s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:52017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001879s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:55488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000814s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:57536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000778s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:45454 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001788s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.1.2:52247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001095s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.1.2:46954 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001143s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.1.2:47574 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098701s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.1.2:36658 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000170301s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:35421 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001002s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:41995 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132201s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:36431 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001956s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:38168 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000222s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0419 18:59:13.461762   14960 logs.go:123] Gathering logs for kube-scheduler [e476774b8f77] ...
	I0419 18:59:13.461793   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e476774b8f77"
	I0419 18:59:13.493215   14960 command_runner.go:130] ! I0420 01:35:03.474569       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:13.493339   14960 command_runner.go:130] ! W0420 01:35:04.965330       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0419 18:59:13.493339   14960 command_runner.go:130] ! W0420 01:35:04.965379       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:13.493407   14960 command_runner.go:130] ! W0420 01:35:04.965392       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0419 18:59:13.493407   14960 command_runner.go:130] ! W0420 01:35:04.965399       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0419 18:59:13.493466   14960 command_runner.go:130] ! I0420 01:35:05.040739       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0419 18:59:13.493466   14960 command_runner.go:130] ! I0420 01:35:05.040800       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:13.493466   14960 command_runner.go:130] ! I0420 01:35:05.044777       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0419 18:59:13.493528   14960 command_runner.go:130] ! I0420 01:35:05.045192       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 18:59:13.493569   14960 command_runner.go:130] ! I0420 01:35:05.045423       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:13.493598   14960 command_runner.go:130] ! I0420 01:35:05.046180       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:13.493598   14960 command_runner.go:130] ! W0420 01:35:05.063208       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:13.493731   14960 command_runner.go:130] ! E0420 01:35:05.064240       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:13.493731   14960 command_runner.go:130] ! W0420 01:35:05.063609       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.493797   14960 command_runner.go:130] ! E0420 01:35:05.065130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.493821   14960 command_runner.go:130] ! W0420 01:35:05.063676       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! E0420 01:35:05.065433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! W0420 01:35:05.063732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! E0420 01:35:05.065801       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! W0420 01:35:05.063780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! E0420 01:35:05.066820       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! W0420 01:35:05.063927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! E0420 01:35:05.067122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! W0420 01:35:05.063973       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! E0420 01:35:05.069517       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! W0420 01:35:05.064025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! E0420 01:35:05.069884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! W0420 01:35:05.064095       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! E0420 01:35:05.070309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! W0420 01:35:05.064163       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! E0420 01:35:05.070884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! W0420 01:35:05.070236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! E0420 01:35:05.071293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:13.494384   14960 command_runner.go:130] ! W0420 01:35:05.070677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! E0420 01:35:05.072125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! W0420 01:35:05.070741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! E0420 01:35:05.073528       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! W0420 01:35:05.072410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! E0420 01:35:05.073910       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! W0420 01:35:05.072540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! E0420 01:35:05.074332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! W0420 01:35:05.987809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! E0420 01:35:05.988072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! W0420 01:35:06.078924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! E0420 01:35:06.079045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! W0420 01:35:06.146102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! E0420 01:35:06.146225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! W0420 01:35:06.213142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! E0420 01:35:06.213279       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! W0420 01:35:06.278808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.495027   14960 command_runner.go:130] ! E0420 01:35:06.279232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.495027   14960 command_runner.go:130] ! W0420 01:35:06.310265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:13.495027   14960 command_runner.go:130] ! E0420 01:35:06.311126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:13.495027   14960 command_runner.go:130] ! W0420 01:35:06.333128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:13.495027   14960 command_runner.go:130] ! E0420 01:35:06.333531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:13.495193   14960 command_runner.go:130] ! W0420 01:35:06.355993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:13.495193   14960 command_runner.go:130] ! E0420 01:35:06.356053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:13.495193   14960 command_runner.go:130] ! W0420 01:35:06.356154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:13.495193   14960 command_runner.go:130] ! E0420 01:35:06.356365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:13.495381   14960 command_runner.go:130] ! W0420 01:35:06.490128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:13.495438   14960 command_runner.go:130] ! E0420 01:35:06.490240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:13.495438   14960 command_runner.go:130] ! W0420 01:35:06.496247       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:13.495499   14960 command_runner.go:130] ! E0420 01:35:06.496709       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:13.495567   14960 command_runner.go:130] ! W0420 01:35:06.552817       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.495567   14960 command_runner.go:130] ! E0420 01:35:06.552917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.495567   14960 command_runner.go:130] ! W0420 01:35:06.607496       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.495650   14960 command_runner.go:130] ! E0420 01:35:06.607914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.495650   14960 command_runner.go:130] ! W0420 01:35:06.608255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:13.495709   14960 command_runner.go:130] ! E0420 01:35:06.608488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:13.495777   14960 command_runner.go:130] ! W0420 01:35:06.623642       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:13.495777   14960 command_runner.go:130] ! E0420 01:35:06.624029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:13.495834   14960 command_runner.go:130] ! I0420 01:35:09.746203       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:13.495834   14960 command_runner.go:130] ! I0420 01:55:30.893306       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0419 18:59:13.495834   14960 command_runner.go:130] ! I0420 01:55:30.893359       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0419 18:59:13.495913   14960 command_runner.go:130] ! I0420 01:55:30.893732       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 18:59:13.495913   14960 command_runner.go:130] ! E0420 01:55:30.894682       1 run.go:74] "command failed" err="finished without leader elect"
	I0419 18:59:13.508294   14960 logs.go:123] Gathering logs for kube-controller-manager [b67f2295d26c] ...
	I0419 18:59:13.508294   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67f2295d26c"
	I0419 18:59:13.544686   14960 command_runner.go:130] ! I0420 01:57:58.124915       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:13.545016   14960 command_runner.go:130] ! I0420 01:57:58.572589       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0419 18:59:13.545016   14960 command_runner.go:130] ! I0420 01:57:58.572759       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:13.545016   14960 command_runner.go:130] ! I0420 01:57:58.576545       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:13.545016   14960 command_runner.go:130] ! I0420 01:57:58.576765       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:13.545097   14960 command_runner.go:130] ! I0420 01:57:58.577138       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0419 18:59:13.545097   14960 command_runner.go:130] ! I0420 01:57:58.577308       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:13.545097   14960 command_runner.go:130] ! I0420 01:58:02.671844       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0419 18:59:13.545097   14960 command_runner.go:130] ! I0420 01:58:02.672396       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0419 18:59:13.545097   14960 command_runner.go:130] ! I0420 01:58:02.683222       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0419 18:59:13.545169   14960 command_runner.go:130] ! I0420 01:58:02.683502       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0419 18:59:13.545169   14960 command_runner.go:130] ! I0420 01:58:02.683748       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0419 18:59:13.545169   14960 command_runner.go:130] ! I0420 01:58:02.684992       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0419 18:59:13.545169   14960 command_runner.go:130] ! I0420 01:58:02.685159       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.689572       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.693653       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.694118       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.694295       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.695565       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.695757       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.700089       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.700328       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.700370       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.708704       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.712057       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.712325       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.712551       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0419 18:59:13.545237   14960 command_runner.go:130] ! E0420 01:58:02.728628       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.728672       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! E0420 01:58:02.742147       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0419 18:59:13.545775   14960 command_runner.go:130] ! I0420 01:58:02.742194       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0419 18:59:13.545775   14960 command_runner.go:130] ! I0420 01:58:02.742206       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0419 18:59:13.545876   14960 command_runner.go:130] ! I0420 01:58:02.748098       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0419 18:59:13.545876   14960 command_runner.go:130] ! I0420 01:58:02.748399       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0419 18:59:13.545876   14960 command_runner.go:130] ! I0420 01:58:02.748420       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.752218       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.752332       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.752344       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.765569       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.765610       1 shared_informer.go:313] Waiting for caches to sync for job
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.765645       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.772658       1 shared_informer.go:320] Caches are synced for tokens
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.773270       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.773287       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.786700       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.788042       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.799412       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.804126       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.804238       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.814226       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.818062       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.818127       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.868296       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.868361       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.868379       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.870217       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.873404       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.873440       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! W0420 01:58:02.873461       1 shared_informer.go:597] resyncPeriod 18h17m32.022460908s is smaller than resyncCheckPeriod 19h9m29.930546571s and the informer has already started. Changing it to 19h9m29.930546571s
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.873587       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.873612       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.873690       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.873722       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0419 18:59:13.546454   14960 command_runner.go:130] ! I0420 01:58:02.873768       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0419 18:59:13.546454   14960 command_runner.go:130] ! I0420 01:58:02.873784       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.873852       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.873883       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.873963       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.873989       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.874019       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.874045       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.874084       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.874104       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.874180       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.874255       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.874269       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.874289       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.910217       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.910746       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.912220       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.928174       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.928508       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.928473       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.929874       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.931641       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.931894       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.932890       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.934333       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.934546       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.934881       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.939106       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.939460       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:12.968845       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:12.968916       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:12.969733       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:12.969944       1 shared_informer.go:313] Waiting for caches to sync for node
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:12.975888       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0419 18:59:13.547019   14960 command_runner.go:130] ! I0420 01:58:12.977148       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0419 18:59:13.547019   14960 command_runner.go:130] ! I0420 01:58:12.977216       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0419 18:59:13.547019   14960 command_runner.go:130] ! I0420 01:58:12.978712       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0419 18:59:13.547019   14960 command_runner.go:130] ! I0420 01:58:12.979007       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0419 18:59:13.547096   14960 command_runner.go:130] ! I0420 01:58:12.979040       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0419 18:59:13.547121   14960 command_runner.go:130] ! I0420 01:58:12.982094       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0419 18:59:13.547152   14960 command_runner.go:130] ! I0420 01:58:12.982639       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0419 18:59:13.547152   14960 command_runner.go:130] ! I0420 01:58:12.982957       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0419 18:59:13.547152   14960 command_runner.go:130] ! I0420 01:58:13.032307       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0419 18:59:13.547190   14960 command_runner.go:130] ! I0420 01:58:13.032749       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0419 18:59:13.547190   14960 command_runner.go:130] ! I0420 01:58:13.035306       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.036848       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.037653       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.038965       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.039366       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.039352       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.040679       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.040782       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.040908       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.041738       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.041781       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.042295       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.041839       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.042314       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.041850       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.042715       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.046953       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.047617       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.047660       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.047670       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.050144       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.050286       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.050982       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.051033       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.051061       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.054294       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.054709       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.054987       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0419 18:59:13.547776   14960 command_runner.go:130] ! I0420 01:58:13.057961       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0419 18:59:13.547776   14960 command_runner.go:130] ! I0420 01:58:13.058399       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0419 18:59:13.547776   14960 command_runner.go:130] ! I0420 01:58:13.058606       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0419 18:59:13.547776   14960 command_runner.go:130] ! I0420 01:58:13.060766       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:13.547776   14960 command_runner.go:130] ! I0420 01:58:13.061307       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0419 18:59:13.547776   14960 command_runner.go:130] ! I0420 01:58:13.060852       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:13.547921   14960 command_runner.go:130] ! I0420 01:58:13.061691       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0419 18:59:13.547945   14960 command_runner.go:130] ! I0420 01:58:13.064061       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0419 18:59:13.547945   14960 command_runner.go:130] ! I0420 01:58:13.064698       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0419 18:59:13.547945   14960 command_runner.go:130] ! I0420 01:58:13.065134       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0419 18:59:13.548006   14960 command_runner.go:130] ! I0420 01:58:13.067945       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0419 18:59:13.548006   14960 command_runner.go:130] ! I0420 01:58:13.068315       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0419 18:59:13.548006   14960 command_runner.go:130] ! I0420 01:58:13.068613       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0419 18:59:13.548052   14960 command_runner.go:130] ! I0420 01:58:13.077312       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0419 18:59:13.548089   14960 command_runner.go:130] ! I0420 01:58:13.077939       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0419 18:59:13.548089   14960 command_runner.go:130] ! I0420 01:58:13.078050       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.078623       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.083275       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.083591       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.083702       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.090751       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.091149       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.091393       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.091591       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.096868       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.097085       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.100720       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.101287       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.101375       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.103459       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.106949       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.107026       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.116002       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.139685       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.148344       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.152489       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.140934       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.151083       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000\" does not exist"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.141105       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.156086       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.156676       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m02\" does not exist"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.156750       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.156865       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.142425       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0419 18:59:13.548654   14960 command_runner.go:130] ! I0420 01:58:13.157020       1 shared_informer.go:320] Caches are synced for expand
	I0419 18:59:13.548654   14960 command_runner.go:130] ! I0420 01:58:13.159992       1 shared_informer.go:320] Caches are synced for ephemeral
	I0419 18:59:13.548654   14960 command_runner.go:130] ! I0420 01:58:13.145957       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:13.548654   14960 command_runner.go:130] ! I0420 01:58:13.162320       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0419 18:59:13.548654   14960 command_runner.go:130] ! I0420 01:58:13.165325       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0419 18:59:13.548654   14960 command_runner.go:130] ! I0420 01:58:13.165759       1 shared_informer.go:320] Caches are synced for job
	I0419 18:59:13.548654   14960 command_runner.go:130] ! I0420 01:58:13.169537       1 shared_informer.go:320] Caches are synced for service account
	I0419 18:59:13.548654   14960 command_runner.go:130] ! I0420 01:58:13.171293       1 shared_informer.go:320] Caches are synced for node
	I0419 18:59:13.548654   14960 command_runner.go:130] ! I0420 01:58:13.178178       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.178222       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.178230       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.178237       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.178270       1 shared_informer.go:320] Caches are synced for attach detach
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.179699       1 shared_informer.go:320] Caches are synced for PV protection
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.183856       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.183905       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.188521       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.195859       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.200417       1 shared_informer.go:320] Caches are synced for crt configmap
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.201881       1 shared_informer.go:320] Caches are synced for persistent volume
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.204647       1 shared_informer.go:320] Caches are synced for endpoint
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.207356       1 shared_informer.go:320] Caches are synced for PVC protection
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.213532       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.214173       1 shared_informer.go:320] Caches are synced for namespace
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.219105       1 shared_informer.go:320] Caches are synced for GC
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.228919       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.535929ms"
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.230155       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.901µs"
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.230170       1 shared_informer.go:320] Caches are synced for HPA
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.234086       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.236046       1 shared_informer.go:320] Caches are synced for TTL
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.240266       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.682408ms"
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.240992       1 shared_informer.go:320] Caches are synced for deployment
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.243741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="114.104µs"
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.248776       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.252859       1 shared_informer.go:320] Caches are synced for daemon sets
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.253008       1 shared_informer.go:320] Caches are synced for taint
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.259997       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.297486       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000"
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.297542       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m02"
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.297627       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m03"
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.297865       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0419 18:59:13.549304   14960 command_runner.go:130] ! I0420 01:58:13.335459       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0419 18:59:13.549304   14960 command_runner.go:130] ! I0420 01:58:13.374436       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:13.549304   14960 command_runner.go:130] ! I0420 01:58:13.389294       1 shared_informer.go:320] Caches are synced for cronjob
	I0419 18:59:13.549347   14960 command_runner.go:130] ! I0420 01:58:13.392315       1 shared_informer.go:320] Caches are synced for disruption
	I0419 18:59:13.549347   14960 command_runner.go:130] ! I0420 01:58:13.397172       1 shared_informer.go:320] Caches are synced for stateful set
	I0419 18:59:13.549347   14960 command_runner.go:130] ! I0420 01:58:13.416186       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:13.549347   14960 command_runner.go:130] ! I0420 01:58:13.857437       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:13.549347   14960 command_runner.go:130] ! I0420 01:58:13.878325       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:13.549347   14960 command_runner.go:130] ! I0420 01:58:13.878534       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0419 18:59:13.549441   14960 command_runner.go:130] ! I0420 01:58:40.290168       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.549462   14960 command_runner.go:130] ! I0420 01:58:53.395955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.694507ms"
	I0419 18:59:13.549462   14960 command_runner.go:130] ! I0420 01:58:53.396146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.003µs"
	I0419 18:59:13.549462   14960 command_runner.go:130] ! I0420 01:59:07.033370       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.713655ms"
	I0419 18:59:13.549529   14960 command_runner.go:130] ! I0420 01:59:07.033533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.092µs"
	I0419 18:59:13.549603   14960 command_runner.go:130] ! I0420 01:59:07.047220       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.391µs"
	I0419 18:59:13.549603   14960 command_runner.go:130] ! I0420 01:59:07.121391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.338984ms"
	I0419 18:59:13.549603   14960 command_runner.go:130] ! I0420 01:59:07.121503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.691µs"
	I0419 18:59:13.566217   14960 logs.go:123] Gathering logs for dmesg ...
	I0419 18:59:13.566217   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 18:59:13.592389   14960 command_runner.go:130] > [Apr20 01:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0419 18:59:13.592476   14960 command_runner.go:130] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0419 18:59:13.592476   14960 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0419 18:59:13.592476   14960 command_runner.go:130] > [  +0.134823] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0419 18:59:13.592572   14960 command_runner.go:130] > [  +0.023006] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0419 18:59:13.592572   14960 command_runner.go:130] > [  +0.000006] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0419 18:59:13.592572   14960 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.065433] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.022829] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0419 18:59:13.592628   14960 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +5.461945] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.733998] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +1.817887] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +7.031305] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0419 18:59:13.592628   14960 command_runner.go:130] > [Apr20 01:57] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.209815] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [ +26.622359] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.115734] kauditd_printk_skb: 73 callbacks suppressed
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.605928] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.209234] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.243987] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +2.954231] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.209781] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.225214] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.313735] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.929646] systemd-fstab-generator[1383]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.108494] kauditd_printk_skb: 205 callbacks suppressed
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +3.650728] systemd-fstab-generator[1520]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +1.371725] kauditd_printk_skb: 49 callbacks suppressed
	I0419 18:59:13.592628   14960 command_runner.go:130] > [Apr20 01:58] kauditd_printk_skb: 25 callbacks suppressed
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +3.878920] systemd-fstab-generator[2324]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +7.552702] kauditd_printk_skb: 70 callbacks suppressed
	I0419 18:59:13.594819   14960 logs.go:123] Gathering logs for describe nodes ...
	I0419 18:59:13.594819   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 18:59:13.827694   14960 command_runner.go:130] > Name:               multinode-348000
	I0419 18:59:13.827694   14960 command_runner.go:130] > Roles:              control-plane
	I0419 18:59:13.827694   14960 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     kubernetes.io/hostname=multinode-348000
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     kubernetes.io/os=linux
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     minikube.k8s.io/name=multinode-348000
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_04_19T18_35_09_0700
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0419 18:59:13.827694   14960 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0419 18:59:13.827694   14960 command_runner.go:130] > CreationTimestamp:  Sat, 20 Apr 2024 01:35:05 +0000
	I0419 18:59:13.827694   14960 command_runner.go:130] > Taints:             <none>
	I0419 18:59:13.827694   14960 command_runner.go:130] > Unschedulable:      false
	I0419 18:59:13.827694   14960 command_runner.go:130] > Lease:
	I0419 18:59:13.827694   14960 command_runner.go:130] >   HolderIdentity:  multinode-348000
	I0419 18:59:13.827694   14960 command_runner.go:130] >   AcquireTime:     <unset>
	I0419 18:59:13.827694   14960 command_runner.go:130] >   RenewTime:       Sat, 20 Apr 2024 01:59:11 +0000
	I0419 18:59:13.827694   14960 command_runner.go:130] > Conditions:
	I0419 18:59:13.827694   14960 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0419 18:59:13.827694   14960 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0419 18:59:13.827694   14960 command_runner.go:130] >   MemoryPressure   False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0419 18:59:13.827694   14960 command_runner.go:130] >   DiskPressure     False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0419 18:59:13.828224   14960 command_runner.go:130] >   PIDPressure      False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0419 18:59:13.828224   14960 command_runner.go:130] >   Ready            True    Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:58:40 +0000   KubeletReady                 kubelet is posting ready status
	I0419 18:59:13.828309   14960 command_runner.go:130] > Addresses:
	I0419 18:59:13.828309   14960 command_runner.go:130] >   InternalIP:  172.19.42.24
	I0419 18:59:13.828309   14960 command_runner.go:130] >   Hostname:    multinode-348000
	I0419 18:59:13.828309   14960 command_runner.go:130] > Capacity:
	I0419 18:59:13.828309   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:13.828309   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:13.828309   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:13.828383   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:13.828383   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:13.828416   14960 command_runner.go:130] > Allocatable:
	I0419 18:59:13.828416   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:13.828416   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:13.828416   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:13.828470   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:13.828486   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:13.828508   14960 command_runner.go:130] > System Info:
	I0419 18:59:13.828508   14960 command_runner.go:130] >   Machine ID:                 bd21fc8af31a4161a4396c16b70a2fc3
	I0419 18:59:13.828508   14960 command_runner.go:130] >   System UUID:                fdc3fb6e-1818-9a4e-b496-b7ed0124a8e6
	I0419 18:59:13.828508   14960 command_runner.go:130] >   Boot ID:                    047b982b-9f97-4a1a-8f8a-a308f369753b
	I0419 18:59:13.828558   14960 command_runner.go:130] >   Kernel Version:             5.10.207
	I0419 18:59:13.828558   14960 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0419 18:59:13.828558   14960 command_runner.go:130] >   Operating System:           linux
	I0419 18:59:13.828591   14960 command_runner.go:130] >   Architecture:               amd64
	I0419 18:59:13.828591   14960 command_runner.go:130] >   Container Runtime Version:  docker://26.0.1
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0419 18:59:13.828620   14960 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0419 18:59:13.828620   14960 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0419 18:59:13.828620   14960 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0419 18:59:13.828620   14960 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0419 18:59:13.828620   14960 command_runner.go:130] >   default                     busybox-fc5497c4f-xnz2k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0419 18:59:13.828620   14960 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-7w477                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0419 18:59:13.828620   14960 command_runner.go:130] >   kube-system                 etcd-multinode-348000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0419 18:59:13.828620   14960 command_runner.go:130] >   kube-system                 kindnet-s4fsr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0419 18:59:13.828620   14960 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-348000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0419 18:59:13.828620   14960 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-348000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0419 18:59:13.828620   14960 command_runner.go:130] >   kube-system                 kube-proxy-kj76x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0419 18:59:13.828620   14960 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-348000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0419 18:59:13.828620   14960 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0419 18:59:13.828620   14960 command_runner.go:130] > Allocated resources:
	I0419 18:59:13.828620   14960 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Resource           Requests     Limits
	I0419 18:59:13.828620   14960 command_runner.go:130] >   --------           --------     ------
	I0419 18:59:13.828620   14960 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0419 18:59:13.828620   14960 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0419 18:59:13.828620   14960 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0419 18:59:13.828620   14960 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0419 18:59:13.828620   14960 command_runner.go:130] > Events:
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0419 18:59:13.828620   14960 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  Starting                 70s                kube-proxy       
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-348000 status is now: NodeHasSufficientPID
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-348000 status is now: NodeHasSufficientMemory
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-348000 status is now: NodeHasNoDiskPressure
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-348000 event: Registered Node multinode-348000 in Controller
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-348000 status is now: NodeReady
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  Starting                 78s                kubelet          Starting kubelet.
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node multinode-348000 status is now: NodeHasSufficientMemory
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node multinode-348000 status is now: NodeHasNoDiskPressure
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node multinode-348000 status is now: NodeHasSufficientPID
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-348000 event: Registered Node multinode-348000 in Controller
	I0419 18:59:13.828620   14960 command_runner.go:130] > Name:               multinode-348000-m02
	I0419 18:59:13.828620   14960 command_runner.go:130] > Roles:              <none>
	I0419 18:59:13.829198   14960 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0419 18:59:13.829198   14960 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0419 18:59:13.829198   14960 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0419 18:59:13.829198   14960 command_runner.go:130] >                     kubernetes.io/hostname=multinode-348000-m02
	I0419 18:59:13.829198   14960 command_runner.go:130] >                     kubernetes.io/os=linux
	I0419 18:59:13.829246   14960 command_runner.go:130] >                     minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	I0419 18:59:13.829246   14960 command_runner.go:130] >                     minikube.k8s.io/name=multinode-348000
	I0419 18:59:13.829246   14960 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0419 18:59:13.829246   14960 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_04_19T18_38_19_0700
	I0419 18:59:13.829246   14960 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0419 18:59:13.829246   14960 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0419 18:59:13.829246   14960 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0419 18:59:13.829246   14960 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0419 18:59:13.829246   14960 command_runner.go:130] > CreationTimestamp:  Sat, 20 Apr 2024 01:38:18 +0000
	I0419 18:59:13.829355   14960 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0419 18:59:13.829355   14960 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0419 18:59:13.829355   14960 command_runner.go:130] > Unschedulable:      false
	I0419 18:59:13.829355   14960 command_runner.go:130] > Lease:
	I0419 18:59:13.829396   14960 command_runner.go:130] >   HolderIdentity:  multinode-348000-m02
	I0419 18:59:13.829396   14960 command_runner.go:130] >   AcquireTime:     <unset>
	I0419 18:59:13.829396   14960 command_runner.go:130] >   RenewTime:       Sat, 20 Apr 2024 01:54:49 +0000
	I0419 18:59:13.829396   14960 command_runner.go:130] > Conditions:
	I0419 18:59:13.829449   14960 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0419 18:59:13.829449   14960 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0419 18:59:13.829489   14960 command_runner.go:130] >   MemoryPressure   Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:13.829489   14960 command_runner.go:130] >   DiskPressure     Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:13.829540   14960 command_runner.go:130] >   PIDPressure      Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:13.829540   14960 command_runner.go:130] >   Ready            Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:13.829540   14960 command_runner.go:130] > Addresses:
	I0419 18:59:13.829605   14960 command_runner.go:130] >   InternalIP:  172.19.32.249
	I0419 18:59:13.829605   14960 command_runner.go:130] >   Hostname:    multinode-348000-m02
	I0419 18:59:13.829605   14960 command_runner.go:130] > Capacity:
	I0419 18:59:13.829605   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:13.829648   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:13.829648   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:13.829648   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:13.829648   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:13.829648   14960 command_runner.go:130] > Allocatable:
	I0419 18:59:13.829690   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:13.829729   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:13.829729   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:13.829771   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:13.829771   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:13.829771   14960 command_runner.go:130] > System Info:
	I0419 18:59:13.829809   14960 command_runner.go:130] >   Machine ID:                 ea453a3100b34d789441206109708446
	I0419 18:59:13.829809   14960 command_runner.go:130] >   System UUID:                9f7972f9-8942-ef4f-b0cf-029b405f5832
	I0419 18:59:13.829851   14960 command_runner.go:130] >   Boot ID:                    d8ef37df-1396-47c1-8bea-04667e5bc60b
	I0419 18:59:13.829851   14960 command_runner.go:130] >   Kernel Version:             5.10.207
	I0419 18:59:13.829851   14960 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0419 18:59:13.829851   14960 command_runner.go:130] >   Operating System:           linux
	I0419 18:59:13.829895   14960 command_runner.go:130] >   Architecture:               amd64
	I0419 18:59:13.829895   14960 command_runner.go:130] >   Container Runtime Version:  docker://26.0.1
	I0419 18:59:13.829895   14960 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0419 18:59:13.829895   14960 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0419 18:59:13.829937   14960 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0419 18:59:13.829937   14960 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0419 18:59:13.829974   14960 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0419 18:59:13.829974   14960 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0419 18:59:13.829974   14960 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0419 18:59:13.830016   14960 command_runner.go:130] >   default                     busybox-fc5497c4f-2d5hs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0419 18:59:13.830016   14960 command_runner.go:130] >   kube-system                 kindnet-s98rh              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0419 18:59:13.830016   14960 command_runner.go:130] >   kube-system                 kube-proxy-bjv9b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0419 18:59:13.830062   14960 command_runner.go:130] > Allocated resources:
	I0419 18:59:13.830062   14960 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0419 18:59:13.830062   14960 command_runner.go:130] >   Resource           Requests   Limits
	I0419 18:59:13.830101   14960 command_runner.go:130] >   --------           --------   ------
	I0419 18:59:13.830101   14960 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0419 18:59:13.830101   14960 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0419 18:59:13.830133   14960 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0419 18:59:13.830133   14960 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0419 18:59:13.830170   14960 command_runner.go:130] > Events:
	I0419 18:59:13.830170   14960 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0419 18:59:13.830170   14960 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0419 18:59:13.830170   14960 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0419 18:59:13.830212   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-348000-m02 status is now: NodeHasSufficientMemory
	I0419 18:59:13.830254   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-348000-m02 status is now: NodeHasNoDiskPressure
	I0419 18:59:13.830254   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-348000-m02 status is now: NodeHasSufficientPID
	I0419 18:59:13.830254   14960 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-348000-m02 event: Registered Node multinode-348000-m02 in Controller
	I0419 18:59:13.830299   14960 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-348000-m02 status is now: NodeReady
	I0419 18:59:13.830336   14960 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-348000-m02 event: Registered Node multinode-348000-m02 in Controller
	I0419 18:59:13.830336   14960 command_runner.go:130] >   Normal  NodeNotReady             20s                node-controller  Node multinode-348000-m02 status is now: NodeNotReady
	I0419 18:59:13.830336   14960 command_runner.go:130] > Name:               multinode-348000-m03
	I0419 18:59:13.830336   14960 command_runner.go:130] > Roles:              <none>
	I0419 18:59:13.830378   14960 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0419 18:59:13.830378   14960 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0419 18:59:13.830378   14960 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0419 18:59:13.830414   14960 command_runner.go:130] >                     kubernetes.io/hostname=multinode-348000-m03
	I0419 18:59:13.830414   14960 command_runner.go:130] >                     kubernetes.io/os=linux
	I0419 18:59:13.830414   14960 command_runner.go:130] >                     minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	I0419 18:59:13.830414   14960 command_runner.go:130] >                     minikube.k8s.io/name=multinode-348000
	I0419 18:59:13.830473   14960 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0419 18:59:13.830473   14960 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_04_19T18_53_29_0700
	I0419 18:59:13.830473   14960 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0419 18:59:13.830473   14960 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0419 18:59:13.830473   14960 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0419 18:59:13.830515   14960 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0419 18:59:13.830515   14960 command_runner.go:130] > CreationTimestamp:  Sat, 20 Apr 2024 01:53:28 +0000
	I0419 18:59:13.830515   14960 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0419 18:59:13.830567   14960 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0419 18:59:13.830567   14960 command_runner.go:130] > Unschedulable:      false
	I0419 18:59:13.830567   14960 command_runner.go:130] > Lease:
	I0419 18:59:13.830687   14960 command_runner.go:130] >   HolderIdentity:  multinode-348000-m03
	I0419 18:59:13.830799   14960 command_runner.go:130] >   AcquireTime:     <unset>
	I0419 18:59:13.830799   14960 command_runner.go:130] >   RenewTime:       Sat, 20 Apr 2024 01:54:29 +0000
	I0419 18:59:13.830845   14960 command_runner.go:130] > Conditions:
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0419 18:59:13.830845   14960 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0419 18:59:13.830845   14960 command_runner.go:130] >   MemoryPressure   Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:13.830845   14960 command_runner.go:130] >   DiskPressure     Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:13.830845   14960 command_runner.go:130] >   PIDPressure      Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Ready            Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:13.830845   14960 command_runner.go:130] > Addresses:
	I0419 18:59:13.830845   14960 command_runner.go:130] >   InternalIP:  172.19.37.59
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Hostname:    multinode-348000-m03
	I0419 18:59:13.830845   14960 command_runner.go:130] > Capacity:
	I0419 18:59:13.830845   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:13.830845   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:13.830845   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:13.830845   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:13.830845   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:13.830845   14960 command_runner.go:130] > Allocatable:
	I0419 18:59:13.830845   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:13.830845   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:13.830845   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:13.830845   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:13.830845   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:13.830845   14960 command_runner.go:130] > System Info:
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Machine ID:                 02e45e9bf03f4852a443a43ac6a8538b
	I0419 18:59:13.830845   14960 command_runner.go:130] >   System UUID:                37a43d59-2157-6e44-8d13-6c975ea12fea
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Boot ID:                    404bc64b-d4fc-4c63-a589-8191649bdfaa
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Kernel Version:             5.10.207
	I0419 18:59:13.830845   14960 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Operating System:           linux
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Architecture:               amd64
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Container Runtime Version:  docker://26.0.1
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0419 18:59:13.830845   14960 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0419 18:59:13.830845   14960 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0419 18:59:13.830845   14960 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0419 18:59:13.830845   14960 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0419 18:59:13.830845   14960 command_runner.go:130] >   kube-system                 kindnet-mg8qs       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0419 18:59:13.830845   14960 command_runner.go:130] >   kube-system                 kube-proxy-2jjsq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0419 18:59:13.830845   14960 command_runner.go:130] > Allocated resources:
	I0419 18:59:13.830845   14960 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Resource           Requests   Limits
	I0419 18:59:13.830845   14960 command_runner.go:130] >   --------           --------   ------
	I0419 18:59:13.830845   14960 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0419 18:59:13.830845   14960 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0419 18:59:13.830845   14960 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0419 18:59:13.830845   14960 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0419 18:59:13.830845   14960 command_runner.go:130] > Events:
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0419 18:59:13.830845   14960 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0419 18:59:13.831449   14960 command_runner.go:130] >   Normal  Starting                 5m41s                  kube-proxy       
	I0419 18:59:13.831449   14960 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0419 18:59:13.831449   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:13.831523   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientMemory
	I0419 18:59:13.831639   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-348000-m03 status is now: NodeHasNoDiskPressure
	I0419 18:59:13.831639   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientPID
	I0419 18:59:13.831639   14960 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-348000-m03 status is now: NodeReady
	I0419 18:59:13.831745   14960 command_runner.go:130] >   Normal  Starting                 5m45s                  kubelet          Starting kubelet.
	I0419 18:59:13.831745   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m45s (x2 over 5m45s)  kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientMemory
	I0419 18:59:13.831783   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m45s (x2 over 5m45s)  kubelet          Node multinode-348000-m03 status is now: NodeHasNoDiskPressure
	I0419 18:59:13.831783   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m45s (x2 over 5m45s)  kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientPID
	I0419 18:59:13.831783   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m45s                  kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:13.831849   14960 command_runner.go:130] >   Normal  RegisteredNode           5m41s                  node-controller  Node multinode-348000-m03 event: Registered Node multinode-348000-m03 in Controller
	I0419 18:59:13.831849   14960 command_runner.go:130] >   Normal  NodeReady                5m37s                  kubelet          Node multinode-348000-m03 status is now: NodeReady
	I0419 18:59:13.831849   14960 command_runner.go:130] >   Normal  NodeNotReady             4m                     node-controller  Node multinode-348000-m03 status is now: NodeNotReady
	I0419 18:59:13.831849   14960 command_runner.go:130] >   Normal  RegisteredNode           60s                    node-controller  Node multinode-348000-m03 event: Registered Node multinode-348000-m03 in Controller
	I0419 18:59:13.842310   14960 logs.go:123] Gathering logs for kube-scheduler [d57aee391c14] ...
	I0419 18:59:13.842310   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57aee391c14"
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:57:58.020728       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.771749       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.771906       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.785599       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.785824       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.785929       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.785956       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.785972       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.786046       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.786323       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.786915       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.887091       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.887476       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.888293       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0419 18:59:13.878862   14960 logs.go:123] Gathering logs for kindnet [f8c798c99407] ...
	I0419 18:59:13.879010   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c798c99407"
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:03.441751       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:03.511070       1 main.go:107] hostIP = 172.19.42.24
	I0419 18:59:13.908249   14960 command_runner.go:130] ! podIP = 172.19.42.24
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:03.513110       1 main.go:116] setting mtu 1500 for CNI 
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:03.513147       1 main.go:146] kindnetd IP family: "ipv4"
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:03.513182       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:07.011650       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:10.084231       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:13.156371       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:16.227521       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:19.299385       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:13.908249   14960 command_runner.go:130] ! panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:13.908249   14960 command_runner.go:130] ! goroutine 1 [running]:
	I0419 18:59:13.908249   14960 command_runner.go:130] ! main.main()
	I0419 18:59:13.908249   14960 command_runner.go:130] ! 	/go/src/cmd/kindnetd/main.go:195 +0xd3d
	I0419 18:59:16.416594   14960 api_server.go:253] Checking apiserver healthz at https://172.19.42.24:8443/healthz ...
	I0419 18:59:16.424442   14960 api_server.go:279] https://172.19.42.24:8443/healthz returned 200:
	ok
	I0419 18:59:16.424924   14960 round_trippers.go:463] GET https://172.19.42.24:8443/version
	I0419 18:59:16.424924   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:16.424924   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:16.424924   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:16.426900   14960 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:59:16.426900   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:16.426900   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:16.426900   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:16.426900   14960 round_trippers.go:580]     Content-Length: 263
	I0419 18:59:16.426900   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:16 GMT
	I0419 18:59:16.426900   14960 round_trippers.go:580]     Audit-Id: 053dda65-737e-4062-888c-a5c46f4ce2fe
	I0419 18:59:16.426900   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:16.426900   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:16.426900   14960 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0419 18:59:16.426900   14960 api_server.go:141] control plane version: v1.30.0
	I0419 18:59:16.426900   14960 api_server.go:131] duration metric: took 3.8512548s to wait for apiserver health ...
	I0419 18:59:16.426900   14960 system_pods.go:43] waiting for kube-system pods to appear ...
	I0419 18:59:16.441380   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 18:59:16.465666   14960 command_runner.go:130] > bd3aa93bac25
	I0419 18:59:16.466162   14960 logs.go:276] 1 containers: [bd3aa93bac25]
	I0419 18:59:16.477311   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 18:59:16.502783   14960 command_runner.go:130] > 2deabe4dbdf4
	I0419 18:59:16.503808   14960 logs.go:276] 1 containers: [2deabe4dbdf4]
	I0419 18:59:16.514967   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 18:59:16.540811   14960 command_runner.go:130] > 352cf21a3e20
	I0419 18:59:16.540811   14960 command_runner.go:130] > 627b84abf45c
	I0419 18:59:16.541036   14960 logs.go:276] 2 containers: [352cf21a3e20 627b84abf45c]
	I0419 18:59:16.551329   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 18:59:16.578348   14960 command_runner.go:130] > d57aee391c14
	I0419 18:59:16.578348   14960 command_runner.go:130] > e476774b8f77
	I0419 18:59:16.578348   14960 logs.go:276] 2 containers: [d57aee391c14 e476774b8f77]
	I0419 18:59:16.589290   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 18:59:16.615892   14960 command_runner.go:130] > e438af0f1ec9
	I0419 18:59:16.616767   14960 command_runner.go:130] > a6586791413d
	I0419 18:59:16.617040   14960 logs.go:276] 2 containers: [e438af0f1ec9 a6586791413d]
	I0419 18:59:16.627464   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 18:59:16.651894   14960 command_runner.go:130] > b67f2295d26c
	I0419 18:59:16.651964   14960 command_runner.go:130] > 9638ddcd5428
	I0419 18:59:16.651964   14960 logs.go:276] 2 containers: [b67f2295d26c 9638ddcd5428]
	I0419 18:59:16.661909   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 18:59:16.688754   14960 command_runner.go:130] > ae0b21715f86
	I0419 18:59:16.688754   14960 command_runner.go:130] > f8c798c99407
	I0419 18:59:16.688917   14960 logs.go:276] 2 containers: [ae0b21715f86 f8c798c99407]
	I0419 18:59:16.688917   14960 logs.go:123] Gathering logs for kube-controller-manager [9638ddcd5428] ...
	I0419 18:59:16.689070   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9638ddcd5428"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:03.372734       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:03.812267       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:03.812307       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:03.816347       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:03.816460       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:03.817145       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:03.817250       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:07.961997       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:07.962027       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:07.977942       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:07.978602       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:07.980093       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:07.989698       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:07.990033       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:07.990321       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:08.005238       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:08.005791       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:08.006985       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:08.018816       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:08.019229       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:08.019480       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:08.046904       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.047815       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.049696       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.050007       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.062049       1 shared_informer.go:320] Caches are synced for tokens
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.065356       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.065873       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.113476       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.114130       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.116086       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.129157       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.129533       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.129568       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.165596       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.166223       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.166242       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.211668       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.211749       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.211766       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.232421       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.232496       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.232934       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.232991       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502058       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502113       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! W0420 01:35:08.502140       1 shared_informer.go:597] resyncPeriod 21h44m16.388395173s is smaller than resyncCheckPeriod 22h35m59.940993284s and the informer has already started. Changing it to 22h35m59.940993284s
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502208       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502278       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502298       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502314       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502330       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502351       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502407       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502437       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502458       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502479       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502501       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! W0420 01:35:08.502514       1 shared_informer.go:597] resyncPeriod 19h4m59.465157498s is smaller than resyncCheckPeriod 22h35m59.940993284s and the informer has already started. Changing it to 22h35m59.940993284s
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502638       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502666       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502684       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502713       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502732       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502771       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502793       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502820       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.503928       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.503949       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.504053       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.534828       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.534961       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.674769       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.675139       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.675159       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.825012       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:08.825352       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:08.825549       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.067591       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.068206       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.068502       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.068578       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.320310       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.320746       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.321134       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.516184       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.516262       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.691568       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.693516       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.693713       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.694525       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.933130       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.933168       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.936074       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.217647       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.218375       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.218475       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.267124       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.267436       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.267570       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.268204       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.268422       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0419 18:59:16.724852   14960 command_runner.go:130] ! E0420 01:35:10.316394       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.316683       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.472792       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.472905       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.472918       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.624680       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.624742       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.624753       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.772273       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.772422       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.773389       1 shared_informer.go:313] Waiting for caches to sync for job
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.922317       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.922464       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.922478       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.070777       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.071059       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.071119       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.071166       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.071195       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.071205       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.222012       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.222056       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.222746       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.372624       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.372812       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.372965       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.522757       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.522983       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.523000       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.671210       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.671410       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.671429       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.820688       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.821596       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.821935       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0419 18:59:16.724852   14960 command_runner.go:130] ! E0420 01:35:11.971137       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.971301       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.971316       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.971323       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.121255       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.121746       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.121947       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.274169       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.274383       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.274402       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.318009       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.318126       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.318164       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.318524       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.318628       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.318650       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.319568       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.319800       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.319996       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.320096       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.320128       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.320161       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.320270       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.381189       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.381256       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.381472       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.381508       1 shared_informer.go:313] Waiting for caches to sync for node
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.395580       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.395660       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.396587       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.396886       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.405182       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.428741       1 shared_informer.go:320] Caches are synced for service account
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.430037       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.433041       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.440027       1 shared_informer.go:320] Caches are synced for namespace
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.466474       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.469554       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.477923       1 shared_informer.go:320] Caches are synced for PV protection
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.479748       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.479794       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.480700       1 shared_informer.go:320] Caches are synced for PVC protection
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.492034       1 shared_informer.go:320] Caches are synced for expand
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.492084       1 shared_informer.go:320] Caches are synced for endpoint
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.492130       1 shared_informer.go:320] Caches are synced for job
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.497920       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.498399       1 shared_informer.go:320] Caches are synced for node
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.498473       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.498515       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.498526       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.498531       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.508187       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000\" does not exist"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.508396       1 shared_informer.go:320] Caches are synced for GC
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.512585       1 shared_informer.go:320] Caches are synced for crt configmap
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.520820       1 shared_informer.go:320] Caches are synced for daemon sets
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.521073       1 shared_informer.go:320] Caches are synced for stateful set
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.521189       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.521223       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.521268       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.527709       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.528722       1 shared_informer.go:320] Caches are synced for cronjob
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.528751       1 shared_informer.go:320] Caches are synced for ephemeral
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.528767       1 shared_informer.go:320] Caches are synced for TTL
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.529370       1 shared_informer.go:320] Caches are synced for HPA
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.529414       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.529477       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.529509       1 shared_informer.go:320] Caches are synced for persistent volume
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.552273       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000" podCIDRs=["10.244.0.0/24"]
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.569198       1 shared_informer.go:320] Caches are synced for taint
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.569287       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.569354       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.569429       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.574991       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.590559       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.623057       1 shared_informer.go:320] Caches are synced for deployment
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.623597       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.651041       1 shared_informer.go:320] Caches are synced for disruption
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.699011       1 shared_informer.go:320] Caches are synced for attach detach
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.705303       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.706815       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:23.168892       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:23.169115       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:23.179171       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:23.263116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="374.4156ms"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:23.291471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.172623ms"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:23.291547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.106µs"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:23.578182       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.803114ms"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:23.630233       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.666311ms"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:23.630467       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="183.125µs"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:36.906373       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="291.116µs"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:36.934151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="76.104µs"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:37.573034       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:39.217159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.488µs"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:39.265403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.862669ms"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:39.266023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="552.786µs"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:38:18.575680       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m02\" does not exist"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:38:18.590900       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m02" podCIDRs=["10.244.1.0/24"]
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:38:22.613051       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m02"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:38:37.669535       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:39:03.031296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.090021ms"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:39:03.053897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.363721ms"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:39:03.054543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.499µs"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:39:05.783927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.434034ms"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:39:05.784276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="108.901µs"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:39:07.103598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.163039ms"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:39:07.104054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.4µs"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:42:52.390190       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:42:52.390530       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:42:52.403944       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m03" podCIDRs=["10.244.2.0/24"]
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:42:52.676079       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m03"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:43:11.211743       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:50:42.818871       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.727856   14960 command_runner.go:130] ! I0420 01:53:22.621370       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.727856   14960 command_runner.go:130] ! I0420 01:53:28.752017       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0419 18:59:16.727856   14960 command_runner.go:130] ! I0420 01:53:28.753300       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.727856   14960 command_runner.go:130] ! I0420 01:53:28.789161       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m03" podCIDRs=["10.244.3.0/24"]
	I0419 18:59:16.727856   14960 command_runner.go:130] ! I0420 01:53:36.097701       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m03"
	I0419 18:59:16.727856   14960 command_runner.go:130] ! I0420 01:55:13.205537       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.745853   14960 logs.go:123] Gathering logs for dmesg ...
	I0419 18:59:16.745853   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 18:59:16.771889   14960 command_runner.go:130] > [Apr20 01:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0419 18:59:16.771889   14960 command_runner.go:130] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0419 18:59:16.771889   14960 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0419 18:59:16.771889   14960 command_runner.go:130] > [  +0.134823] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0419 18:59:16.771889   14960 command_runner.go:130] > [  +0.023006] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0419 18:59:16.771889   14960 command_runner.go:130] > [  +0.000006] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0419 18:59:16.771889   14960 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0419 18:59:16.771889   14960 command_runner.go:130] > [  +0.065433] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0419 18:59:16.771889   14960 command_runner.go:130] > [  +0.022829] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0419 18:59:16.771889   14960 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0419 18:59:16.771889   14960 command_runner.go:130] > [  +5.461945] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.733998] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +1.817887] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +7.031305] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0419 18:59:16.772867   14960 command_runner.go:130] > [Apr20 01:57] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.209815] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [ +26.622359] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.115734] kauditd_printk_skb: 73 callbacks suppressed
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.605928] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.209234] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.243987] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +2.954231] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.209781] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.225214] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.313735] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.929646] systemd-fstab-generator[1383]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.108494] kauditd_printk_skb: 205 callbacks suppressed
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +3.650728] systemd-fstab-generator[1520]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +1.371725] kauditd_printk_skb: 49 callbacks suppressed
	I0419 18:59:16.772867   14960 command_runner.go:130] > [Apr20 01:58] kauditd_printk_skb: 25 callbacks suppressed
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +3.878920] systemd-fstab-generator[2324]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +7.552702] kauditd_printk_skb: 70 callbacks suppressed
	I0419 18:59:16.773857   14960 logs.go:123] Gathering logs for kube-scheduler [d57aee391c14] ...
	I0419 18:59:16.773857   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57aee391c14"
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:57:58.020728       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.771749       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.771906       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.785599       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.785824       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.785929       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.785956       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.785972       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.786046       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.786323       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.786915       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.887091       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.887476       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.888293       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0419 18:59:16.806462   14960 logs.go:123] Gathering logs for kube-controller-manager [b67f2295d26c] ...
	I0419 18:59:16.806462   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67f2295d26c"
	I0419 18:59:16.839657   14960 command_runner.go:130] ! I0420 01:57:58.124915       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:16.839780   14960 command_runner.go:130] ! I0420 01:57:58.572589       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0419 18:59:16.839780   14960 command_runner.go:130] ! I0420 01:57:58.572759       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:16.839780   14960 command_runner.go:130] ! I0420 01:57:58.576545       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:16.839780   14960 command_runner.go:130] ! I0420 01:57:58.576765       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:16.839850   14960 command_runner.go:130] ! I0420 01:57:58.577138       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0419 18:59:16.839850   14960 command_runner.go:130] ! I0420 01:57:58.577308       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:16.839850   14960 command_runner.go:130] ! I0420 01:58:02.671844       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0419 18:59:16.839850   14960 command_runner.go:130] ! I0420 01:58:02.672396       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0419 18:59:16.839850   14960 command_runner.go:130] ! I0420 01:58:02.683222       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0419 18:59:16.839931   14960 command_runner.go:130] ! I0420 01:58:02.683502       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0419 18:59:16.839931   14960 command_runner.go:130] ! I0420 01:58:02.683748       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0419 18:59:16.839931   14960 command_runner.go:130] ! I0420 01:58:02.684992       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0419 18:59:16.839931   14960 command_runner.go:130] ! I0420 01:58:02.685159       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0419 18:59:16.840000   14960 command_runner.go:130] ! I0420 01:58:02.689572       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0419 18:59:16.840000   14960 command_runner.go:130] ! I0420 01:58:02.693653       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0419 18:59:16.840000   14960 command_runner.go:130] ! I0420 01:58:02.694118       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0419 18:59:16.840000   14960 command_runner.go:130] ! I0420 01:58:02.694295       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0419 18:59:16.840000   14960 command_runner.go:130] ! I0420 01:58:02.695565       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0419 18:59:16.840072   14960 command_runner.go:130] ! I0420 01:58:02.695757       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0419 18:59:16.840072   14960 command_runner.go:130] ! I0420 01:58:02.700089       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0419 18:59:16.840106   14960 command_runner.go:130] ! I0420 01:58:02.700328       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0419 18:59:16.840106   14960 command_runner.go:130] ! I0420 01:58:02.700370       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0419 18:59:16.840153   14960 command_runner.go:130] ! I0420 01:58:02.708704       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0419 18:59:16.840153   14960 command_runner.go:130] ! I0420 01:58:02.712057       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0419 18:59:16.840153   14960 command_runner.go:130] ! I0420 01:58:02.712325       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0419 18:59:16.840202   14960 command_runner.go:130] ! I0420 01:58:02.712551       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0419 18:59:16.840202   14960 command_runner.go:130] ! E0420 01:58:02.728628       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.728672       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! E0420 01:58:02.742147       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.742194       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.742206       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.748098       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.748399       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.748420       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.752218       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.752332       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.752344       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.765569       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.765610       1 shared_informer.go:313] Waiting for caches to sync for job
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.765645       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.772658       1 shared_informer.go:320] Caches are synced for tokens
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.773270       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.773287       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.786700       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.788042       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.799412       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.804126       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.804238       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.814226       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.818062       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.818127       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.868296       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.868361       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.868379       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.870217       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.873404       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.873440       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! W0420 01:58:02.873461       1 shared_informer.go:597] resyncPeriod 18h17m32.022460908s is smaller than resyncCheckPeriod 19h9m29.930546571s and the informer has already started. Changing it to 19h9m29.930546571s
	I0419 18:59:16.840807   14960 command_runner.go:130] ! I0420 01:58:02.873587       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0419 18:59:16.840807   14960 command_runner.go:130] ! I0420 01:58:02.873612       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0419 18:59:16.840807   14960 command_runner.go:130] ! I0420 01:58:02.873690       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0419 18:59:16.840889   14960 command_runner.go:130] ! I0420 01:58:02.873722       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0419 18:59:16.840933   14960 command_runner.go:130] ! I0420 01:58:02.873768       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0419 18:59:16.840933   14960 command_runner.go:130] ! I0420 01:58:02.873784       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0419 18:59:16.840933   14960 command_runner.go:130] ! I0420 01:58:02.873852       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0419 18:59:16.840988   14960 command_runner.go:130] ! I0420 01:58:02.873883       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0419 18:59:16.841022   14960 command_runner.go:130] ! I0420 01:58:02.873963       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0419 18:59:16.841022   14960 command_runner.go:130] ! I0420 01:58:02.873989       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0419 18:59:16.841071   14960 command_runner.go:130] ! I0420 01:58:02.874019       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0419 18:59:16.841071   14960 command_runner.go:130] ! I0420 01:58:02.874045       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0419 18:59:16.841139   14960 command_runner.go:130] ! I0420 01:58:02.874084       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0419 18:59:16.841139   14960 command_runner.go:130] ! I0420 01:58:02.874104       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0419 18:59:16.841139   14960 command_runner.go:130] ! I0420 01:58:02.874180       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0419 18:59:16.841209   14960 command_runner.go:130] ! I0420 01:58:02.874255       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0419 18:59:16.841261   14960 command_runner.go:130] ! I0420 01:58:02.874269       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:16.841261   14960 command_runner.go:130] ! I0420 01:58:02.874289       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.910217       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.910746       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.912220       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.928174       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.928508       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.928473       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.929874       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.931641       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.931894       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.932890       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0419 18:59:16.842387   14960 command_runner.go:130] ! I0420 01:58:02.934333       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0419 18:59:16.842387   14960 command_runner.go:130] ! I0420 01:58:02.934546       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0419 18:59:16.842387   14960 command_runner.go:130] ! I0420 01:58:02.934881       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0419 18:59:16.842387   14960 command_runner.go:130] ! I0420 01:58:02.939106       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0419 18:59:16.842387   14960 command_runner.go:130] ! I0420 01:58:02.939460       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0419 18:59:16.842387   14960 command_runner.go:130] ! I0420 01:58:12.968845       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0419 18:59:16.842496   14960 command_runner.go:130] ! I0420 01:58:12.968916       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0419 18:59:16.842496   14960 command_runner.go:130] ! I0420 01:58:12.969733       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0419 18:59:16.842496   14960 command_runner.go:130] ! I0420 01:58:12.969944       1 shared_informer.go:313] Waiting for caches to sync for node
	I0419 18:59:16.842551   14960 command_runner.go:130] ! I0420 01:58:12.975888       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0419 18:59:16.842551   14960 command_runner.go:130] ! I0420 01:58:12.977148       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0419 18:59:16.842551   14960 command_runner.go:130] ! I0420 01:58:12.977216       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0419 18:59:16.842616   14960 command_runner.go:130] ! I0420 01:58:12.978712       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0419 18:59:16.842642   14960 command_runner.go:130] ! I0420 01:58:12.979007       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:12.979040       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:12.982094       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:12.982639       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:12.982957       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.032307       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.032749       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.035306       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.036848       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.037653       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.038965       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.039366       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.039352       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.040679       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.040782       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.040908       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.041738       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.041781       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.042295       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.041839       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.042314       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.041850       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.042715       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.046953       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.047617       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.047660       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0419 18:59:16.843210   14960 command_runner.go:130] ! I0420 01:58:13.047670       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0419 18:59:16.843210   14960 command_runner.go:130] ! I0420 01:58:13.050144       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0419 18:59:16.843210   14960 command_runner.go:130] ! I0420 01:58:13.050286       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0419 18:59:16.843357   14960 command_runner.go:130] ! I0420 01:58:13.050982       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0419 18:59:16.843357   14960 command_runner.go:130] ! I0420 01:58:13.051033       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0419 18:59:16.843666   14960 command_runner.go:130] ! I0420 01:58:13.051061       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0419 18:59:16.843698   14960 command_runner.go:130] ! I0420 01:58:13.054294       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0419 18:59:16.844099   14960 command_runner.go:130] ! I0420 01:58:13.054709       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0419 18:59:16.844794   14960 command_runner.go:130] ! I0420 01:58:13.054987       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0419 18:59:16.844833   14960 command_runner.go:130] ! I0420 01:58:13.057961       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0419 18:59:16.846005   14960 command_runner.go:130] ! I0420 01:58:13.058399       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0419 18:59:16.846005   14960 command_runner.go:130] ! I0420 01:58:13.058606       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0419 18:59:16.846539   14960 command_runner.go:130] ! I0420 01:58:13.060766       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:16.846539   14960 command_runner.go:130] ! I0420 01:58:13.061307       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0419 18:59:16.846579   14960 command_runner.go:130] ! I0420 01:58:13.060852       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:16.846579   14960 command_runner.go:130] ! I0420 01:58:13.061691       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0419 18:59:16.846579   14960 command_runner.go:130] ! I0420 01:58:13.064061       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0419 18:59:16.846664   14960 command_runner.go:130] ! I0420 01:58:13.064698       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.065134       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.067945       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.068315       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.068613       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.077312       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.077939       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.078050       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.078623       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.083275       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.083591       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.083702       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.090751       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.091149       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.091393       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.091591       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.096868       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.097085       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.100720       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.101287       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.101375       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0419 18:59:16.847247   14960 command_runner.go:130] ! I0420 01:58:13.103459       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0419 18:59:16.847444   14960 command_runner.go:130] ! I0420 01:58:13.106949       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0419 18:59:16.847571   14960 command_runner.go:130] ! I0420 01:58:13.107026       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0419 18:59:16.847571   14960 command_runner.go:130] ! I0420 01:58:13.116002       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:16.847571   14960 command_runner.go:130] ! I0420 01:58:13.139685       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0419 18:59:16.847571   14960 command_runner.go:130] ! I0420 01:58:13.148344       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.848113   14960 command_runner.go:130] ! I0420 01:58:13.152489       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.848202   14960 command_runner.go:130] ! I0420 01:58:13.140934       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0419 18:59:16.848202   14960 command_runner.go:130] ! I0420 01:58:13.151083       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000\" does not exist"
	I0419 18:59:16.848202   14960 command_runner.go:130] ! I0420 01:58:13.141105       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.156086       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.156676       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m02\" does not exist"
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.156750       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.156865       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.142425       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.157020       1 shared_informer.go:320] Caches are synced for expand
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.159992       1 shared_informer.go:320] Caches are synced for ephemeral
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.145957       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.162320       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.165325       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.165759       1 shared_informer.go:320] Caches are synced for job
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.169537       1 shared_informer.go:320] Caches are synced for service account
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.171293       1 shared_informer.go:320] Caches are synced for node
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.178178       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.178222       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.178230       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.178237       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.178270       1 shared_informer.go:320] Caches are synced for attach detach
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.179699       1 shared_informer.go:320] Caches are synced for PV protection
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.183856       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.183905       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.188521       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.195859       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.200417       1 shared_informer.go:320] Caches are synced for crt configmap
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.201881       1 shared_informer.go:320] Caches are synced for persistent volume
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.204647       1 shared_informer.go:320] Caches are synced for endpoint
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.207356       1 shared_informer.go:320] Caches are synced for PVC protection
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.213532       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.214173       1 shared_informer.go:320] Caches are synced for namespace
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.219105       1 shared_informer.go:320] Caches are synced for GC
	I0419 18:59:16.849313   14960 command_runner.go:130] ! I0420 01:58:13.228919       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.535929ms"
	I0419 18:59:16.849313   14960 command_runner.go:130] ! I0420 01:58:13.230155       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.901µs"
	I0419 18:59:16.849313   14960 command_runner.go:130] ! I0420 01:58:13.230170       1 shared_informer.go:320] Caches are synced for HPA
	I0419 18:59:16.849313   14960 command_runner.go:130] ! I0420 01:58:13.234086       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0419 18:59:16.849313   14960 command_runner.go:130] ! I0420 01:58:13.236046       1 shared_informer.go:320] Caches are synced for TTL
	I0419 18:59:16.849313   14960 command_runner.go:130] ! I0420 01:58:13.240266       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.682408ms"
	I0419 18:59:16.849431   14960 command_runner.go:130] ! I0420 01:58:13.240992       1 shared_informer.go:320] Caches are synced for deployment
	I0419 18:59:16.849431   14960 command_runner.go:130] ! I0420 01:58:13.243741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="114.104µs"
	I0419 18:59:16.849431   14960 command_runner.go:130] ! I0420 01:58:13.248776       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0419 18:59:16.849431   14960 command_runner.go:130] ! I0420 01:58:13.252859       1 shared_informer.go:320] Caches are synced for daemon sets
	I0419 18:59:16.849431   14960 command_runner.go:130] ! I0420 01:58:13.253008       1 shared_informer.go:320] Caches are synced for taint
	I0419 18:59:16.849503   14960 command_runner.go:130] ! I0420 01:58:13.259997       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0419 18:59:16.849503   14960 command_runner.go:130] ! I0420 01:58:13.297486       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000"
	I0419 18:59:16.849560   14960 command_runner.go:130] ! I0420 01:58:13.297542       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m02"
	I0419 18:59:16.849560   14960 command_runner.go:130] ! I0420 01:58:13.297627       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m03"
	I0419 18:59:16.849560   14960 command_runner.go:130] ! I0420 01:58:13.297865       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0419 18:59:16.849560   14960 command_runner.go:130] ! I0420 01:58:13.335459       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0419 18:59:16.849623   14960 command_runner.go:130] ! I0420 01:58:13.374436       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:16.849623   14960 command_runner.go:130] ! I0420 01:58:13.389294       1 shared_informer.go:320] Caches are synced for cronjob
	I0419 18:59:16.849623   14960 command_runner.go:130] ! I0420 01:58:13.392315       1 shared_informer.go:320] Caches are synced for disruption
	I0419 18:59:16.849623   14960 command_runner.go:130] ! I0420 01:58:13.397172       1 shared_informer.go:320] Caches are synced for stateful set
	I0419 18:59:16.849678   14960 command_runner.go:130] ! I0420 01:58:13.416186       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:16.849678   14960 command_runner.go:130] ! I0420 01:58:13.857437       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:16.849678   14960 command_runner.go:130] ! I0420 01:58:13.878325       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:16.849735   14960 command_runner.go:130] ! I0420 01:58:13.878534       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0419 18:59:16.849735   14960 command_runner.go:130] ! I0420 01:58:40.290168       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.849804   14960 command_runner.go:130] ! I0420 01:58:53.395955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.694507ms"
	I0419 18:59:16.849804   14960 command_runner.go:130] ! I0420 01:58:53.396146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.003µs"
	I0419 18:59:16.849804   14960 command_runner.go:130] ! I0420 01:59:07.033370       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.713655ms"
	I0419 18:59:16.849881   14960 command_runner.go:130] ! I0420 01:59:07.033533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.092µs"
	I0419 18:59:16.849881   14960 command_runner.go:130] ! I0420 01:59:07.047220       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.391µs"
	I0419 18:59:16.849936   14960 command_runner.go:130] ! I0420 01:59:07.121391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.338984ms"
	I0419 18:59:16.849936   14960 command_runner.go:130] ! I0420 01:59:07.121503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.691µs"
	I0419 18:59:16.868154   14960 logs.go:123] Gathering logs for kube-proxy [a6586791413d] ...
	I0419 18:59:16.868154   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6586791413d"
	I0419 18:59:16.901180   14960 command_runner.go:130] ! I0420 01:35:26.120497       1 server_linux.go:69] "Using iptables proxy"
	I0419 18:59:16.902068   14960 command_runner.go:130] ! I0420 01:35:26.156956       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.42.231"]
	I0419 18:59:16.902148   14960 command_runner.go:130] ! I0420 01:35:26.208282       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 18:59:16.902148   14960 command_runner.go:130] ! I0420 01:35:26.208472       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 18:59:16.902148   14960 command_runner.go:130] ! I0420 01:35:26.208501       1 server_linux.go:165] "Using iptables Proxier"
	I0419 18:59:16.902148   14960 command_runner.go:130] ! I0420 01:35:26.214693       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 18:59:16.902148   14960 command_runner.go:130] ! I0420 01:35:26.216114       1 server.go:872] "Version info" version="v1.30.0"
	I0419 18:59:16.902148   14960 command_runner.go:130] ! I0420 01:35:26.216181       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:16.902148   14960 command_runner.go:130] ! I0420 01:35:26.219192       1 config.go:192] "Starting service config controller"
	I0419 18:59:16.902148   14960 command_runner.go:130] ! I0420 01:35:26.219810       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 18:59:16.902148   14960 command_runner.go:130] ! I0420 01:35:26.220079       1 config.go:101] "Starting endpoint slice config controller"
	I0419 18:59:16.902287   14960 command_runner.go:130] ! I0420 01:35:26.220093       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 18:59:16.902287   14960 command_runner.go:130] ! I0420 01:35:26.221802       1 config.go:319] "Starting node config controller"
	I0419 18:59:16.902287   14960 command_runner.go:130] ! I0420 01:35:26.221980       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 18:59:16.902287   14960 command_runner.go:130] ! I0420 01:35:26.320313       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 18:59:16.902355   14960 command_runner.go:130] ! I0420 01:35:26.320380       1 shared_informer.go:320] Caches are synced for service config
	I0419 18:59:16.902355   14960 command_runner.go:130] ! I0420 01:35:26.322323       1 shared_informer.go:320] Caches are synced for node config
	I0419 18:59:16.904086   14960 logs.go:123] Gathering logs for kindnet [f8c798c99407] ...
	I0419 18:59:16.904086   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c798c99407"
	I0419 18:59:16.934121   14960 command_runner.go:130] ! I0420 01:58:03.441751       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0419 18:59:16.934398   14960 command_runner.go:130] ! I0420 01:58:03.511070       1 main.go:107] hostIP = 172.19.42.24
	I0419 18:59:16.934398   14960 command_runner.go:130] ! podIP = 172.19.42.24
	I0419 18:59:16.934463   14960 command_runner.go:130] ! I0420 01:58:03.513110       1 main.go:116] setting mtu 1500 for CNI 
	I0419 18:59:16.934463   14960 command_runner.go:130] ! I0420 01:58:03.513147       1 main.go:146] kindnetd IP family: "ipv4"
	I0419 18:59:16.934463   14960 command_runner.go:130] ! I0420 01:58:03.513182       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0419 18:59:16.934463   14960 command_runner.go:130] ! I0420 01:58:07.011650       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:16.934463   14960 command_runner.go:130] ! I0420 01:58:10.084231       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:16.934534   14960 command_runner.go:130] ! I0420 01:58:13.156371       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:16.934534   14960 command_runner.go:130] ! I0420 01:58:16.227521       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:16.934602   14960 command_runner.go:130] ! I0420 01:58:19.299385       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:16.934602   14960 command_runner.go:130] ! panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:16.934602   14960 command_runner.go:130] ! goroutine 1 [running]:
	I0419 18:59:16.934665   14960 command_runner.go:130] ! main.main()
	I0419 18:59:16.934665   14960 command_runner.go:130] ! 	/go/src/cmd/kindnetd/main.go:195 +0xd3d
	I0419 18:59:16.935608   14960 logs.go:123] Gathering logs for container status ...
	I0419 18:59:16.935608   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 18:59:17.009273   14960 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0419 18:59:17.009273   14960 command_runner.go:130] > d608b74b0597f       8c811b4aec35f                                                                                         12 seconds ago       Running             busybox                   1                   75ff9f4e9dde2       busybox-fc5497c4f-xnz2k
	I0419 18:59:17.009406   14960 command_runner.go:130] > 352cf21a3e202       cbb01a7bd410d                                                                                         12 seconds ago       Running             coredns                   1                   f28a1e746a9b4       coredns-7db6d8ff4d-7w477
	I0419 18:59:17.009406   14960 command_runner.go:130] > c6f350bee7762       6e38f40d628db                                                                                         32 seconds ago       Running             storage-provisioner       2                   5472c1fba3929       storage-provisioner
	I0419 18:59:17.009406   14960 command_runner.go:130] > ae0b21715f861       4950bb10b3f87                                                                                         41 seconds ago       Running             kindnet-cni               2                   b5a777eba295e       kindnet-s4fsr
	I0419 18:59:17.009406   14960 command_runner.go:130] > f8c798c994078       4950bb10b3f87                                                                                         About a minute ago   Exited              kindnet-cni               1                   b5a777eba295e       kindnet-s4fsr
	I0419 18:59:17.009507   14960 command_runner.go:130] > 45383c4290ad1       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   5472c1fba3929       storage-provisioner
	I0419 18:59:17.009507   14960 command_runner.go:130] > e438af0f1ec9e       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   09f65a6953038       kube-proxy-kj76x
	I0419 18:59:17.009507   14960 command_runner.go:130] > 2deabe4dbdf41       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   ab9ff1d906880       etcd-multinode-348000
	I0419 18:59:17.009566   14960 command_runner.go:130] > bd3aa93bac25b       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   d7052a6f04def       kube-apiserver-multinode-348000
	I0419 18:59:17.009632   14960 command_runner.go:130] > b67f2295d26ca       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   118cca57d1f54       kube-controller-manager-multinode-348000
	I0419 18:59:17.009632   14960 command_runner.go:130] > d57aee391c146       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   e8baa597c1467       kube-scheduler-multinode-348000
	I0419 18:59:17.009694   14960 command_runner.go:130] > d8afb3e1fb946       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   476e3efb38684       busybox-fc5497c4f-xnz2k
	I0419 18:59:17.009694   14960 command_runner.go:130] > 627b84abf45cd       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   2dd294415aae1       coredns-7db6d8ff4d-7w477
	I0419 18:59:17.009765   14960 command_runner.go:130] > a6586791413d0       a0bf559e280cf                                                                                         23 minutes ago       Exited              kube-proxy                0                   7935893e9f22a       kube-proxy-kj76x
	I0419 18:59:17.009765   14960 command_runner.go:130] > 9638ddcd54285       c7aad43836fa5                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   6e420625b84be       kube-controller-manager-multinode-348000
	I0419 18:59:17.009841   14960 command_runner.go:130] > e476774b8f77e       259c8277fcbbc                                                                                         24 minutes ago       Exited              kube-scheduler            0                   e5d733991bf1a       kube-scheduler-multinode-348000
	I0419 18:59:17.012000   14960 logs.go:123] Gathering logs for describe nodes ...
	I0419 18:59:17.012000   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 18:59:17.214899   14960 command_runner.go:130] > Name:               multinode-348000
	I0419 18:59:17.215860   14960 command_runner.go:130] > Roles:              control-plane
	I0419 18:59:17.215895   14960 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     kubernetes.io/hostname=multinode-348000
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     kubernetes.io/os=linux
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     minikube.k8s.io/name=multinode-348000
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_04_19T18_35_09_0700
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0419 18:59:17.216024   14960 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0419 18:59:17.216024   14960 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0419 18:59:17.216024   14960 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0419 18:59:17.216024   14960 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0419 18:59:17.216024   14960 command_runner.go:130] > CreationTimestamp:  Sat, 20 Apr 2024 01:35:05 +0000
	I0419 18:59:17.216084   14960 command_runner.go:130] > Taints:             <none>
	I0419 18:59:17.216084   14960 command_runner.go:130] > Unschedulable:      false
	I0419 18:59:17.216084   14960 command_runner.go:130] > Lease:
	I0419 18:59:17.216084   14960 command_runner.go:130] >   HolderIdentity:  multinode-348000
	I0419 18:59:17.216084   14960 command_runner.go:130] >   AcquireTime:     <unset>
	I0419 18:59:17.216127   14960 command_runner.go:130] >   RenewTime:       Sat, 20 Apr 2024 01:59:11 +0000
	I0419 18:59:17.216127   14960 command_runner.go:130] > Conditions:
	I0419 18:59:17.216127   14960 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0419 18:59:17.216127   14960 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0419 18:59:17.216127   14960 command_runner.go:130] >   MemoryPressure   False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0419 18:59:17.216127   14960 command_runner.go:130] >   DiskPressure     False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0419 18:59:17.216127   14960 command_runner.go:130] >   PIDPressure      False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0419 18:59:17.216235   14960 command_runner.go:130] >   Ready            True    Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:58:40 +0000   KubeletReady                 kubelet is posting ready status
	I0419 18:59:17.216235   14960 command_runner.go:130] > Addresses:
	I0419 18:59:17.216235   14960 command_runner.go:130] >   InternalIP:  172.19.42.24
	I0419 18:59:17.216235   14960 command_runner.go:130] >   Hostname:    multinode-348000
	I0419 18:59:17.216235   14960 command_runner.go:130] > Capacity:
	I0419 18:59:17.216235   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:17.216235   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:17.216235   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:17.216352   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:17.216352   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:17.216352   14960 command_runner.go:130] > Allocatable:
	I0419 18:59:17.216352   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:17.216352   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:17.216352   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:17.216352   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:17.216352   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:17.216352   14960 command_runner.go:130] > System Info:
	I0419 18:59:17.216352   14960 command_runner.go:130] >   Machine ID:                 bd21fc8af31a4161a4396c16b70a2fc3
	I0419 18:59:17.216352   14960 command_runner.go:130] >   System UUID:                fdc3fb6e-1818-9a4e-b496-b7ed0124a8e6
	I0419 18:59:17.216352   14960 command_runner.go:130] >   Boot ID:                    047b982b-9f97-4a1a-8f8a-a308f369753b
	I0419 18:59:17.216352   14960 command_runner.go:130] >   Kernel Version:             5.10.207
	I0419 18:59:17.216468   14960 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0419 18:59:17.216468   14960 command_runner.go:130] >   Operating System:           linux
	I0419 18:59:17.216468   14960 command_runner.go:130] >   Architecture:               amd64
	I0419 18:59:17.216468   14960 command_runner.go:130] >   Container Runtime Version:  docker://26.0.1
	I0419 18:59:17.216468   14960 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0419 18:59:17.216468   14960 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0419 18:59:17.216468   14960 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0419 18:59:17.216468   14960 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0419 18:59:17.216468   14960 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0419 18:59:17.216468   14960 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0419 18:59:17.216468   14960 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0419 18:59:17.216468   14960 command_runner.go:130] >   default                     busybox-fc5497c4f-xnz2k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0419 18:59:17.216593   14960 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-7w477                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0419 18:59:17.216593   14960 command_runner.go:130] >   kube-system                 etcd-multinode-348000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	I0419 18:59:17.216593   14960 command_runner.go:130] >   kube-system                 kindnet-s4fsr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0419 18:59:17.216593   14960 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-348000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	I0419 18:59:17.216593   14960 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-348000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0419 18:59:17.216593   14960 command_runner.go:130] >   kube-system                 kube-proxy-kj76x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0419 18:59:17.216593   14960 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-348000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0419 18:59:17.216720   14960 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0419 18:59:17.216720   14960 command_runner.go:130] > Allocated resources:
	I0419 18:59:17.216720   14960 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0419 18:59:17.216720   14960 command_runner.go:130] >   Resource           Requests     Limits
	I0419 18:59:17.216720   14960 command_runner.go:130] >   --------           --------     ------
	I0419 18:59:17.216720   14960 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0419 18:59:17.216720   14960 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0419 18:59:17.216720   14960 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0419 18:59:17.216720   14960 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0419 18:59:17.216720   14960 command_runner.go:130] > Events:
	I0419 18:59:17.216720   14960 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0419 18:59:17.216720   14960 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0419 18:59:17.216720   14960 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0419 18:59:17.216840   14960 command_runner.go:130] >   Normal  Starting                 73s                kube-proxy       
	I0419 18:59:17.216840   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-348000 status is now: NodeHasSufficientPID
	I0419 18:59:17.216840   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:17.216840   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-348000 status is now: NodeHasSufficientMemory
	I0419 18:59:17.216840   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-348000 status is now: NodeHasNoDiskPressure
	I0419 18:59:17.216840   14960 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0419 18:59:17.216945   14960 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-348000 event: Registered Node multinode-348000 in Controller
	I0419 18:59:17.216945   14960 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-348000 status is now: NodeReady
	I0419 18:59:17.216945   14960 command_runner.go:130] >   Normal  Starting                 82s                kubelet          Starting kubelet.
	I0419 18:59:17.216945   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  82s (x8 over 82s)  kubelet          Node multinode-348000 status is now: NodeHasSufficientMemory
	I0419 18:59:17.216945   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    82s (x8 over 82s)  kubelet          Node multinode-348000 status is now: NodeHasNoDiskPressure
	I0419 18:59:17.217054   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     82s (x7 over 82s)  kubelet          Node multinode-348000 status is now: NodeHasSufficientPID
	I0419 18:59:17.217054   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:17.217054   14960 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-348000 event: Registered Node multinode-348000 in Controller
	I0419 18:59:17.245375   14960 command_runner.go:130] > Name:               multinode-348000-m02
	I0419 18:59:17.245375   14960 command_runner.go:130] > Roles:              <none>
	I0419 18:59:17.245375   14960 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     kubernetes.io/hostname=multinode-348000-m02
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     kubernetes.io/os=linux
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     minikube.k8s.io/name=multinode-348000
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_04_19T18_38_19_0700
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0419 18:59:17.245375   14960 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0419 18:59:17.245375   14960 command_runner.go:130] > CreationTimestamp:  Sat, 20 Apr 2024 01:38:18 +0000
	I0419 18:59:17.245375   14960 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0419 18:59:17.245375   14960 command_runner.go:130] > Unschedulable:      false
	I0419 18:59:17.245375   14960 command_runner.go:130] > Lease:
	I0419 18:59:17.245375   14960 command_runner.go:130] >   HolderIdentity:  multinode-348000-m02
	I0419 18:59:17.245375   14960 command_runner.go:130] >   AcquireTime:     <unset>
	I0419 18:59:17.245375   14960 command_runner.go:130] >   RenewTime:       Sat, 20 Apr 2024 01:54:49 +0000
	I0419 18:59:17.245375   14960 command_runner.go:130] > Conditions:
	I0419 18:59:17.245375   14960 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0419 18:59:17.245375   14960 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0419 18:59:17.245375   14960 command_runner.go:130] >   MemoryPressure   Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:17.245375   14960 command_runner.go:130] >   DiskPressure     Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:17.245375   14960 command_runner.go:130] >   PIDPressure      Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:17.245375   14960 command_runner.go:130] >   Ready            Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:17.245375   14960 command_runner.go:130] > Addresses:
	I0419 18:59:17.245945   14960 command_runner.go:130] >   InternalIP:  172.19.32.249
	I0419 18:59:17.246090   14960 command_runner.go:130] >   Hostname:    multinode-348000-m02
	I0419 18:59:17.246090   14960 command_runner.go:130] > Capacity:
	I0419 18:59:17.246156   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:17.246156   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:17.246156   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:17.246156   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:17.246156   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:17.246156   14960 command_runner.go:130] > Allocatable:
	I0419 18:59:17.246156   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:17.246156   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:17.246156   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:17.246156   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:17.246156   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:17.246156   14960 command_runner.go:130] > System Info:
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Machine ID:                 ea453a3100b34d789441206109708446
	I0419 18:59:17.246156   14960 command_runner.go:130] >   System UUID:                9f7972f9-8942-ef4f-b0cf-029b405f5832
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Boot ID:                    d8ef37df-1396-47c1-8bea-04667e5bc60b
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Kernel Version:             5.10.207
	I0419 18:59:17.246156   14960 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Operating System:           linux
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Architecture:               amd64
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Container Runtime Version:  docker://26.0.1
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0419 18:59:17.246156   14960 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0419 18:59:17.246156   14960 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0419 18:59:17.246156   14960 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0419 18:59:17.246156   14960 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0419 18:59:17.246156   14960 command_runner.go:130] >   default                     busybox-fc5497c4f-2d5hs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0419 18:59:17.246156   14960 command_runner.go:130] >   kube-system                 kindnet-s98rh              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0419 18:59:17.246156   14960 command_runner.go:130] >   kube-system                 kube-proxy-bjv9b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0419 18:59:17.246156   14960 command_runner.go:130] > Allocated resources:
	I0419 18:59:17.246156   14960 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Resource           Requests   Limits
	I0419 18:59:17.246156   14960 command_runner.go:130] >   --------           --------   ------
	I0419 18:59:17.246156   14960 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0419 18:59:17.246156   14960 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0419 18:59:17.246156   14960 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0419 18:59:17.246156   14960 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0419 18:59:17.246156   14960 command_runner.go:130] > Events:
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0419 18:59:17.246156   14960 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-348000-m02 status is now: NodeHasSufficientMemory
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-348000-m02 status is now: NodeHasNoDiskPressure
	I0419 18:59:17.246696   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-348000-m02 status is now: NodeHasSufficientPID
	I0419 18:59:17.246696   14960 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-348000-m02 event: Registered Node multinode-348000-m02 in Controller
	I0419 18:59:17.246744   14960 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-348000-m02 status is now: NodeReady
	I0419 18:59:17.246744   14960 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-348000-m02 event: Registered Node multinode-348000-m02 in Controller
	I0419 18:59:17.246744   14960 command_runner.go:130] >   Normal  NodeNotReady             24s                node-controller  Node multinode-348000-m02 status is now: NodeNotReady
	I0419 18:59:17.279540   14960 command_runner.go:130] > Name:               multinode-348000-m03
	I0419 18:59:17.280291   14960 command_runner.go:130] > Roles:              <none>
	I0419 18:59:17.280344   14960 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0419 18:59:17.280344   14960 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0419 18:59:17.280344   14960 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0419 18:59:17.280382   14960 command_runner.go:130] >                     kubernetes.io/hostname=multinode-348000-m03
	I0419 18:59:17.280399   14960 command_runner.go:130] >                     kubernetes.io/os=linux
	I0419 18:59:17.280399   14960 command_runner.go:130] >                     minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	I0419 18:59:17.280399   14960 command_runner.go:130] >                     minikube.k8s.io/name=multinode-348000
	I0419 18:59:17.280399   14960 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0419 18:59:17.280399   14960 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_04_19T18_53_29_0700
	I0419 18:59:17.280399   14960 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0419 18:59:17.280493   14960 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0419 18:59:17.280493   14960 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0419 18:59:17.280538   14960 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0419 18:59:17.280538   14960 command_runner.go:130] > CreationTimestamp:  Sat, 20 Apr 2024 01:53:28 +0000
	I0419 18:59:17.280577   14960 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0419 18:59:17.280577   14960 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0419 18:59:17.280577   14960 command_runner.go:130] > Unschedulable:      false
	I0419 18:59:17.280577   14960 command_runner.go:130] > Lease:
	I0419 18:59:17.280629   14960 command_runner.go:130] >   HolderIdentity:  multinode-348000-m03
	I0419 18:59:17.280629   14960 command_runner.go:130] >   AcquireTime:     <unset>
	I0419 18:59:17.280629   14960 command_runner.go:130] >   RenewTime:       Sat, 20 Apr 2024 01:54:29 +0000
	I0419 18:59:17.280666   14960 command_runner.go:130] > Conditions:
	I0419 18:59:17.280666   14960 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0419 18:59:17.280666   14960 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0419 18:59:17.280712   14960 command_runner.go:130] >   MemoryPressure   Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:17.280731   14960 command_runner.go:130] >   DiskPressure     Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:17.280768   14960 command_runner.go:130] >   PIDPressure      Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:17.280768   14960 command_runner.go:130] >   Ready            Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:17.280813   14960 command_runner.go:130] > Addresses:
	I0419 18:59:17.280830   14960 command_runner.go:130] >   InternalIP:  172.19.37.59
	I0419 18:59:17.280830   14960 command_runner.go:130] >   Hostname:    multinode-348000-m03
	I0419 18:59:17.280830   14960 command_runner.go:130] > Capacity:
	I0419 18:59:17.280865   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:17.280865   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:17.280865   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:17.280865   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:17.280865   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:17.280910   14960 command_runner.go:130] > Allocatable:
	I0419 18:59:17.280928   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:17.280928   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:17.280928   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:17.280928   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:17.280928   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:17.280928   14960 command_runner.go:130] > System Info:
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Machine ID:                 02e45e9bf03f4852a443a43ac6a8538b
	I0419 18:59:17.280928   14960 command_runner.go:130] >   System UUID:                37a43d59-2157-6e44-8d13-6c975ea12fea
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Boot ID:                    404bc64b-d4fc-4c63-a589-8191649bdfaa
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Kernel Version:             5.10.207
	I0419 18:59:17.280928   14960 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Operating System:           linux
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Architecture:               amd64
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Container Runtime Version:  docker://26.0.1
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0419 18:59:17.280928   14960 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0419 18:59:17.280928   14960 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0419 18:59:17.280928   14960 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0419 18:59:17.280928   14960 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0419 18:59:17.280928   14960 command_runner.go:130] >   kube-system                 kindnet-mg8qs       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0419 18:59:17.280928   14960 command_runner.go:130] >   kube-system                 kube-proxy-2jjsq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0419 18:59:17.280928   14960 command_runner.go:130] > Allocated resources:
	I0419 18:59:17.280928   14960 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Resource           Requests   Limits
	I0419 18:59:17.280928   14960 command_runner.go:130] >   --------           --------   ------
	I0419 18:59:17.280928   14960 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0419 18:59:17.280928   14960 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0419 18:59:17.280928   14960 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0419 18:59:17.280928   14960 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0419 18:59:17.280928   14960 command_runner.go:130] > Events:
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0419 18:59:17.280928   14960 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Normal  Starting                 5m45s                  kube-proxy       
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientMemory
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-348000-m03 status is now: NodeHasNoDiskPressure
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientPID
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-348000-m03 status is now: NodeReady
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Normal  Starting                 5m49s                  kubelet          Starting kubelet.
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m49s (x2 over 5m49s)  kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientMemory
	I0419 18:59:17.281468   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m49s (x2 over 5m49s)  kubelet          Node multinode-348000-m03 status is now: NodeHasNoDiskPressure
	I0419 18:59:17.281468   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m49s (x2 over 5m49s)  kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientPID
	I0419 18:59:17.281468   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m49s                  kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:17.281468   14960 command_runner.go:130] >   Normal  RegisteredNode           5m45s                  node-controller  Node multinode-348000-m03 event: Registered Node multinode-348000-m03 in Controller
	I0419 18:59:17.281555   14960 command_runner.go:130] >   Normal  NodeReady                5m41s                  kubelet          Node multinode-348000-m03 status is now: NodeReady
	I0419 18:59:17.281555   14960 command_runner.go:130] >   Normal  NodeNotReady             4m4s                   node-controller  Node multinode-348000-m03 status is now: NodeNotReady
	I0419 18:59:17.281555   14960 command_runner.go:130] >   Normal  RegisteredNode           64s                    node-controller  Node multinode-348000-m03 event: Registered Node multinode-348000-m03 in Controller
	I0419 18:59:17.298002   14960 logs.go:123] Gathering logs for kube-apiserver [bd3aa93bac25] ...
	I0419 18:59:17.298078   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd3aa93bac25"
	I0419 18:59:17.332260   14960 command_runner.go:130] ! I0420 01:57:57.501840       1 options.go:221] external host was not specified, using 172.19.42.24
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:57.505380       1 server.go:148] Version: v1.30.0
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:57.505690       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:58.138487       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:58.138530       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:58.138987       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:58.139098       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:58.139890       1 instance.go:299] Using reconciler: lease
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.078678       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.078889       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.354874       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.355339       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.630985       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.818361       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.834974       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.835019       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.835028       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.835870       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.835981       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.837241       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.838781       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.838919       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.838930       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.841133       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.841240       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.842492       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.842627       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.842640       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.843439       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.843519       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.843649       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.332898   14960 command_runner.go:130] ! I0420 01:57:59.844516       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0419 18:59:17.332898   14960 command_runner.go:130] ! I0420 01:57:59.847031       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0419 18:59:17.332898   14960 command_runner.go:130] ! W0420 01:57:59.847132       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.332971   14960 command_runner.go:130] ! W0420 01:57:59.847143       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:17.332971   14960 command_runner.go:130] ! I0420 01:57:59.847848       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0419 18:59:17.332971   14960 command_runner.go:130] ! W0420 01:57:59.847881       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.332971   14960 command_runner.go:130] ! W0420 01:57:59.847889       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:17.333051   14960 command_runner.go:130] ! I0420 01:57:59.849069       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0419 18:59:17.333051   14960 command_runner.go:130] ! W0420 01:57:59.849173       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0419 18:59:17.333156   14960 command_runner.go:130] ! I0420 01:57:59.851437       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0419 18:59:17.333156   14960 command_runner.go:130] ! W0420 01:57:59.851563       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.333156   14960 command_runner.go:130] ! W0420 01:57:59.851574       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:17.333242   14960 command_runner.go:130] ! I0420 01:57:59.852258       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0419 18:59:17.333242   14960 command_runner.go:130] ! W0420 01:57:59.852357       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.333242   14960 command_runner.go:130] ! W0420 01:57:59.852367       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:17.333314   14960 command_runner.go:130] ! I0420 01:57:59.855318       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0419 18:59:17.333314   14960 command_runner.go:130] ! W0420 01:57:59.855413       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.333314   14960 command_runner.go:130] ! W0420 01:57:59.855499       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:17.333314   14960 command_runner.go:130] ! I0420 01:57:59.857232       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0419 18:59:17.333379   14960 command_runner.go:130] ! I0420 01:57:59.859073       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0419 18:59:17.333379   14960 command_runner.go:130] ! W0420 01:57:59.859177       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0419 18:59:17.333379   14960 command_runner.go:130] ! W0420 01:57:59.859187       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.333379   14960 command_runner.go:130] ! I0420 01:57:59.866540       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0419 18:59:17.333379   14960 command_runner.go:130] ! W0420 01:57:59.866633       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0419 18:59:17.333379   14960 command_runner.go:130] ! W0420 01:57:59.866643       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0419 18:59:17.333499   14960 command_runner.go:130] ! I0420 01:57:59.873672       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0419 18:59:17.333537   14960 command_runner.go:130] ! W0420 01:57:59.873814       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.333537   14960 command_runner.go:130] ! W0420 01:57:59.873827       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:17.333537   14960 command_runner.go:130] ! I0420 01:57:59.875959       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0419 18:59:17.333581   14960 command_runner.go:130] ! W0420 01:57:59.875999       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.333581   14960 command_runner.go:130] ! I0420 01:57:59.909243       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0419 18:59:17.333581   14960 command_runner.go:130] ! W0420 01:57:59.909284       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.333581   14960 command_runner.go:130] ! I0420 01:58:00.597195       1 secure_serving.go:213] Serving securely on [::]:8443
	I0419 18:59:17.333639   14960 command_runner.go:130] ! I0420 01:58:00.597666       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:17.333639   14960 command_runner.go:130] ! I0420 01:58:00.598134       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:17.333639   14960 command_runner.go:130] ! I0420 01:58:00.597703       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0419 18:59:17.333639   14960 command_runner.go:130] ! I0420 01:58:00.597737       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:17.333730   14960 command_runner.go:130] ! I0420 01:58:00.600064       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0419 18:59:17.333730   14960 command_runner.go:130] ! I0420 01:58:00.600948       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0419 18:59:17.333758   14960 command_runner.go:130] ! I0420 01:58:00.601165       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0419 18:59:17.333758   14960 command_runner.go:130] ! I0420 01:58:00.601445       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0419 18:59:17.333795   14960 command_runner.go:130] ! I0420 01:58:00.602539       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0419 18:59:17.333795   14960 command_runner.go:130] ! I0420 01:58:00.602852       1 aggregator.go:163] waiting for initial CRD sync...
	I0419 18:59:17.333795   14960 command_runner.go:130] ! I0420 01:58:00.603187       1 controller.go:78] Starting OpenAPI AggregationController
	I0419 18:59:17.333795   14960 command_runner.go:130] ! I0420 01:58:00.604023       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0419 18:59:17.333851   14960 command_runner.go:130] ! I0420 01:58:00.604384       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0419 18:59:17.333851   14960 command_runner.go:130] ! I0420 01:58:00.606631       1 available_controller.go:423] Starting AvailableConditionController
	I0419 18:59:17.333851   14960 command_runner.go:130] ! I0420 01:58:00.606857       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0419 18:59:17.333851   14960 command_runner.go:130] ! I0420 01:58:00.607138       1 controller.go:116] Starting legacy_token_tracking_controller
	I0419 18:59:17.333944   14960 command_runner.go:130] ! I0420 01:58:00.607178       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0419 18:59:17.333944   14960 command_runner.go:130] ! I0420 01:58:00.607325       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0419 18:59:17.333976   14960 command_runner.go:130] ! I0420 01:58:00.607349       1 controller.go:139] Starting OpenAPI controller
	I0419 18:59:17.333976   14960 command_runner.go:130] ! I0420 01:58:00.607381       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0419 18:59:17.334001   14960 command_runner.go:130] ! I0420 01:58:00.607407       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0419 18:59:17.334001   14960 command_runner.go:130] ! I0420 01:58:00.607409       1 naming_controller.go:291] Starting NamingConditionController
	I0419 18:59:17.334001   14960 command_runner.go:130] ! I0420 01:58:00.607487       1 establishing_controller.go:76] Starting EstablishingController
	I0419 18:59:17.334049   14960 command_runner.go:130] ! I0420 01:58:00.607512       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0419 18:59:17.334049   14960 command_runner.go:130] ! I0420 01:58:00.607530       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0419 18:59:17.334049   14960 command_runner.go:130] ! I0420 01:58:00.607546       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0419 18:59:17.334049   14960 command_runner.go:130] ! I0420 01:58:00.608170       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0419 18:59:17.334105   14960 command_runner.go:130] ! I0420 01:58:00.608198       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0419 18:59:17.334105   14960 command_runner.go:130] ! I0420 01:58:00.608328       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:17.334105   14960 command_runner.go:130] ! I0420 01:58:00.608421       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:17.334105   14960 command_runner.go:130] ! I0420 01:58:00.607383       1 controller.go:87] Starting OpenAPI V3 controller
	I0419 18:59:17.334105   14960 command_runner.go:130] ! I0420 01:58:00.709605       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0419 18:59:17.334197   14960 command_runner.go:130] ! I0420 01:58:00.736531       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0419 18:59:17.334197   14960 command_runner.go:130] ! I0420 01:58:00.737086       1 shared_informer.go:320] Caches are synced for configmaps
	I0419 18:59:17.334197   14960 command_runner.go:130] ! I0420 01:58:00.737192       1 aggregator.go:165] initial CRD sync complete...
	I0419 18:59:17.334241   14960 command_runner.go:130] ! I0420 01:58:00.737219       1 autoregister_controller.go:141] Starting autoregister controller
	I0419 18:59:17.334241   14960 command_runner.go:130] ! I0420 01:58:00.737225       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0419 18:59:17.334241   14960 command_runner.go:130] ! I0420 01:58:00.737230       1 cache.go:39] Caches are synced for autoregister controller
	I0419 18:59:17.334336   14960 command_runner.go:130] ! I0420 01:58:00.740699       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 18:59:17.334364   14960 command_runner.go:130] ! I0420 01:58:00.741004       1 policy_source.go:224] refreshing policies
	I0419 18:59:17.334364   14960 command_runner.go:130] ! I0420 01:58:00.742672       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0419 18:59:17.334364   14960 command_runner.go:130] ! I0420 01:58:00.747054       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0419 18:59:17.334418   14960 command_runner.go:130] ! I0420 01:58:00.805770       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0419 18:59:17.334418   14960 command_runner.go:130] ! I0420 01:58:00.807460       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0419 18:59:17.334441   14960 command_runner.go:130] ! I0420 01:58:00.814456       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0419 18:59:17.334441   14960 command_runner.go:130] ! I0420 01:58:00.814490       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0419 18:59:17.334485   14960 command_runner.go:130] ! I0420 01:58:00.815844       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0419 18:59:17.334485   14960 command_runner.go:130] ! I0420 01:58:01.612010       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0419 18:59:17.334485   14960 command_runner.go:130] ! W0420 01:58:02.160618       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.42.231 172.19.42.24]
	I0419 18:59:17.334543   14960 command_runner.go:130] ! I0420 01:58:02.163332       1 controller.go:615] quota admission added evaluator for: endpoints
	I0419 18:59:17.334567   14960 command_runner.go:130] ! I0420 01:58:02.176968       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0419 18:59:17.334567   14960 command_runner.go:130] ! I0420 01:58:03.430204       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0419 18:59:17.334600   14960 command_runner.go:130] ! I0420 01:58:03.761410       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0419 18:59:17.334600   14960 command_runner.go:130] ! I0420 01:58:03.780335       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0419 18:59:17.334600   14960 command_runner.go:130] ! I0420 01:58:03.907022       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0419 18:59:17.334600   14960 command_runner.go:130] ! I0420 01:58:03.924019       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0419 18:59:17.334600   14960 command_runner.go:130] ! W0420 01:58:22.143512       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.42.24]
	I0419 18:59:17.343646   14960 logs.go:123] Gathering logs for kube-proxy [e438af0f1ec9] ...
	I0419 18:59:17.343646   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e438af0f1ec9"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.129201       1 server_linux.go:69] "Using iptables proxy"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.201631       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.42.24"]
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.344058       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.344107       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.344137       1 server_linux.go:165] "Using iptables Proxier"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.353394       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.354462       1 server.go:872] "Version info" version="v1.30.0"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.354693       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.358325       1 config.go:192] "Starting service config controller"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.358366       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.358985       1 config.go:101] "Starting endpoint slice config controller"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.359176       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.358997       1 config.go:319] "Starting node config controller"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.368409       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.459372       1 shared_informer.go:320] Caches are synced for service config
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.459745       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.470538       1 shared_informer.go:320] Caches are synced for node config
	I0419 18:59:17.378001   14960 logs.go:123] Gathering logs for coredns [627b84abf45c] ...
	I0419 18:59:17.378001   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627b84abf45c"
	I0419 18:59:17.406877   14960 command_runner.go:130] > .:53
	I0419 18:59:17.407915   14960 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93714cfd58e203ac2baa48ea9c7b435951d2a9faed7a5c70b4e84c89c6c1fe4c1dfa41f14b3ebf0f5941dade673a82eaad960061e673dd78dcb856db3393b39d
	I0419 18:59:17.407915   14960 command_runner.go:130] > CoreDNS-1.11.1
	I0419 18:59:17.407915   14960 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0419 18:59:17.407915   14960 command_runner.go:130] > [INFO] 127.0.0.1:37904 - 37003 "HINFO IN 1336380353163369387.5260466772500757990. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.053891439s
	I0419 18:59:17.407915   14960 command_runner.go:130] > [INFO] 10.244.1.2:47846 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002913s
	I0419 18:59:17.408049   14960 command_runner.go:130] > [INFO] 10.244.1.2:60728 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.118385602s
	I0419 18:59:17.408049   14960 command_runner.go:130] > [INFO] 10.244.1.2:48827 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.043741711s
	I0419 18:59:17.408049   14960 command_runner.go:130] > [INFO] 10.244.1.2:57126 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.111854404s
	I0419 18:59:17.408049   14960 command_runner.go:130] > [INFO] 10.244.0.3:44468 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001971s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:58477 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.002287005s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:39825 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000198301s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:54956 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000604s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:48593 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001261s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:58743 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.027871268s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:44517 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002274s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:35998 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000219501s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:58770 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012982932s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:55456 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174201s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:59031 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001304s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:41687 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000198401s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:46929 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003044s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:35877 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000325701s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:53705 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000318601s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:40560 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164401s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:53239 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001239s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:39754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001464s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:41397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001668s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:49126 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001646s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:37850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115501s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:44063 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001443s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:39924 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000607s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:53244 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000622s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:52017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001879s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:55488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000814s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:57536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000778s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:45454 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001788s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:52247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001095s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:46954 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001143s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:47574 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098701s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:36658 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000170301s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:35421 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001002s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:41995 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132201s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:36431 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001956s
	I0419 18:59:17.408696   14960 command_runner.go:130] > [INFO] 10.244.0.3:38168 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000222s
	I0419 18:59:17.408696   14960 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0419 18:59:17.408696   14960 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0419 18:59:17.411627   14960 logs.go:123] Gathering logs for kube-scheduler [e476774b8f77] ...
	I0419 18:59:17.411664   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e476774b8f77"
	I0419 18:59:17.438622   14960 command_runner.go:130] ! I0420 01:35:03.474569       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:17.439412   14960 command_runner.go:130] ! W0420 01:35:04.965330       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0419 18:59:17.439487   14960 command_runner.go:130] ! W0420 01:35:04.965379       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:17.439487   14960 command_runner.go:130] ! W0420 01:35:04.965392       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0419 18:59:17.439487   14960 command_runner.go:130] ! W0420 01:35:04.965399       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0419 18:59:17.439546   14960 command_runner.go:130] ! I0420 01:35:05.040739       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0419 18:59:17.439584   14960 command_runner.go:130] ! I0420 01:35:05.040800       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:17.439584   14960 command_runner.go:130] ! I0420 01:35:05.044777       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0419 18:59:17.439584   14960 command_runner.go:130] ! I0420 01:35:05.045192       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 18:59:17.439648   14960 command_runner.go:130] ! I0420 01:35:05.045423       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:17.439648   14960 command_runner.go:130] ! I0420 01:35:05.046180       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:17.439705   14960 command_runner.go:130] ! W0420 01:35:05.063208       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:17.439705   14960 command_runner.go:130] ! E0420 01:35:05.064240       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:17.439766   14960 command_runner.go:130] ! W0420 01:35:05.063609       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.439798   14960 command_runner.go:130] ! E0420 01:35:05.065130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.439857   14960 command_runner.go:130] ! W0420 01:35:05.063676       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:17.439902   14960 command_runner.go:130] ! E0420 01:35:05.065433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:17.439936   14960 command_runner.go:130] ! W0420 01:35:05.063732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:17.439936   14960 command_runner.go:130] ! E0420 01:35:05.065801       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:17.440042   14960 command_runner.go:130] ! W0420 01:35:05.063780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:17.440042   14960 command_runner.go:130] ! E0420 01:35:05.066820       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:17.440096   14960 command_runner.go:130] ! W0420 01:35:05.063927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440136   14960 command_runner.go:130] ! E0420 01:35:05.067122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440160   14960 command_runner.go:130] ! W0420 01:35:05.063973       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:17.440160   14960 command_runner.go:130] ! E0420 01:35:05.069517       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:17.440219   14960 command_runner.go:130] ! W0420 01:35:05.064025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:17.440219   14960 command_runner.go:130] ! E0420 01:35:05.069884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:17.440285   14960 command_runner.go:130] ! W0420 01:35:05.064095       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:17.440285   14960 command_runner.go:130] ! E0420 01:35:05.070309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:17.440285   14960 command_runner.go:130] ! W0420 01:35:05.064163       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440368   14960 command_runner.go:130] ! E0420 01:35:05.070884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440432   14960 command_runner.go:130] ! W0420 01:35:05.070236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:17.440432   14960 command_runner.go:130] ! E0420 01:35:05.071293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:17.440532   14960 command_runner.go:130] ! W0420 01:35:05.070677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:17.440561   14960 command_runner.go:130] ! E0420 01:35:05.072125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:17.440615   14960 command_runner.go:130] ! W0420 01:35:05.070741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:17.440656   14960 command_runner.go:130] ! E0420 01:35:05.073528       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:17.440681   14960 command_runner.go:130] ! W0420 01:35:05.072410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:17.440726   14960 command_runner.go:130] ! E0420 01:35:05.073910       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:17.440726   14960 command_runner.go:130] ! W0420 01:35:05.072540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440786   14960 command_runner.go:130] ! E0420 01:35:05.074332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440786   14960 command_runner.go:130] ! W0420 01:35:05.987809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:05.988072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.078924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.079045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.146102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.146225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.213142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.213279       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.278808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.279232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.310265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.311126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.333128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.333531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.355993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.356053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.356154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.356365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.490128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.490240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.496247       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.496709       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.552817       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.552917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.607496       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.607914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.608255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.608488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.623642       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.624029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! I0420 01:35:09.746203       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:17.440919   14960 command_runner.go:130] ! I0420 01:55:30.893306       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0419 18:59:17.440919   14960 command_runner.go:130] ! I0420 01:55:30.893359       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0419 18:59:17.440919   14960 command_runner.go:130] ! I0420 01:55:30.893732       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 18:59:17.441969   14960 command_runner.go:130] ! E0420 01:55:30.894682       1 run.go:74] "command failed" err="finished without leader elect"
	I0419 18:59:17.452692   14960 logs.go:123] Gathering logs for kindnet [ae0b21715f86] ...
	I0419 18:59:17.452692   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0b21715f86"
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:36.715209       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:36.715359       1 main.go:107] hostIP = 172.19.42.24
	I0419 18:59:17.488866   14960 command_runner.go:130] ! podIP = 172.19.42.24
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:36.715480       1 main.go:116] setting mtu 1500 for CNI 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:36.715877       1 main.go:146] kindnetd IP family: "ipv4"
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:36.806023       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:37.413197       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:37.413291       1 main.go:227] handling current node
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:37.413685       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:37.413745       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:37.414005       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.19.32.249 Flags: [] Table: 0} 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:37.506308       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:37.506405       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:37.506676       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.19.37.59 Flags: [] Table: 0} 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:47.525508       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:47.525608       1 main.go:227] handling current node
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:47.525629       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:47.525638       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:47.526101       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:47.526135       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:57.538448       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:57.538834       1 main.go:227] handling current node
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:57.538899       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:57.538926       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:57.539176       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:57.539274       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:59:07.555783       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:59:07.555932       1 main.go:227] handling current node
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:59:07.556426       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:59:07.556438       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:59:07.556563       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:59:07.556590       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:17.491823   14960 logs.go:123] Gathering logs for Docker ...
	I0419 18:59:17.491880   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 18:59:17.531874   14960 command_runner.go:130] > Apr 20 01:56:27 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:17.531942   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:17.531991   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:17.532063   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:17.532063   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0419 18:59:17.532063   14960 command_runner.go:130] > Apr 20 01:56:28 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:17.532134   14960 command_runner.go:130] > Apr 20 01:56:28 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:17.532198   14960 command_runner.go:130] > Apr 20 01:56:28 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:17.532223   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0419 18:59:17.532253   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0419 18:59:17.532253   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:17.532253   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:17.532334   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:17.532363   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:17.532363   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0419 18:59:17.532420   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:17.532420   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:17.532420   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:17.532482   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 systemd[1]: Starting Docker Application Container Engine...
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[657]: time="2024-04-20T01:57:18.710176447Z" level=info msg="Starting up"
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[657]: time="2024-04-20T01:57:18.711651787Z" level=info msg="containerd not running, starting managed containerd"
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[657]: time="2024-04-20T01:57:18.716746379Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=664
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.747165139Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778478063Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778645056Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778743452Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778860747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.780842867Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.780950062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781281849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:17.533078   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781381945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.533078   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781405744Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0419 18:59:17.533078   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781418543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.533154   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781890324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.533154   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.782561296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.533154   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786065554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:17.533224   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786174049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.533224   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786324143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:17.533224   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786418639Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0419 18:59:17.533315   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.787110911Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0419 18:59:17.533315   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.787239206Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0419 18:59:17.533315   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.787257405Z" level=info msg="metadata content store policy set" policy=shared
	I0419 18:59:17.533377   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794203322Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0419 18:59:17.533377   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794271219Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0419 18:59:17.533377   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794292218Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0419 18:59:17.533377   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794308818Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0419 18:59:17.533377   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794325217Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0419 18:59:17.533491   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794399514Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0419 18:59:17.533520   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794805397Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0419 18:59:17.533520   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795021089Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0419 18:59:17.533564   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795123284Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0419 18:59:17.533564   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795209281Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0419 18:59:17.533564   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795227280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.533620   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795252079Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.533620   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795270178Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.533682   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795305177Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.533682   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795321176Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.533682   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795336476Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.533748   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795368674Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.533748   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795383074Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.533748   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795405873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533748   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795423972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533748   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795438172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533837   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795453671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533837   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795468970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533901   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795483970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533901   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795576866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533901   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795594465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533953   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795610465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533953   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795628364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533953   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795642863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.534042   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795657163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.534042   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795671762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.534042   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795713760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0419 18:59:17.534108   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795756259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.534108   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795811856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.534108   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795843255Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0419 18:59:17.534175   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795920052Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0419 18:59:17.534175   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795944151Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0419 18:59:17.534175   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796175542Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0419 18:59:17.534246   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796194141Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0419 18:59:17.534246   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796263238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.534246   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796305336Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0419 18:59:17.534246   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796319336Z" level=info msg="NRI interface is disabled by configuration."
	I0419 18:59:17.534366   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.797416591Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0419 18:59:17.534366   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.797499188Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0419 18:59:17.534366   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.797659381Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0419 18:59:17.534366   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.798178860Z" level=info msg="containerd successfully booted in 0.054054s"
	I0419 18:59:17.534366   14960 command_runner.go:130] > Apr 20 01:57:19 multinode-348000 dockerd[657]: time="2024-04-20T01:57:19.782299514Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0419 18:59:17.534366   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.015692930Z" level=info msg="Loading containers: start."
	I0419 18:59:17.534485   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.458486133Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0419 18:59:17.534485   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.551244732Z" level=info msg="Loading containers: done."
	I0419 18:59:17.534485   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.579065252Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	I0419 18:59:17.534485   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.579904847Z" level=info msg="Daemon has completed initialization"
	I0419 18:59:17.534485   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.637363974Z" level=info msg="API listen on [::]:2376"
	I0419 18:59:17.534485   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 systemd[1]: Started Docker Application Container Engine.
	I0419 18:59:17.534662   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.639403561Z" level=info msg="API listen on /var/run/docker.sock"
	I0419 18:59:17.534662   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.472939019Z" level=info msg="Processing signal 'terminated'"
	I0419 18:59:17.534662   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 systemd[1]: Stopping Docker Application Container Engine...
	I0419 18:59:17.534662   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.475778002Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0419 18:59:17.534662   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.476696029Z" level=info msg="Daemon shutdown complete"
	I0419 18:59:17.534662   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.476992338Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0419 18:59:17.534809   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.477157542Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0419 18:59:17.534809   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 systemd[1]: docker.service: Deactivated successfully.
	I0419 18:59:17.534809   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 systemd[1]: Stopped Docker Application Container Engine.
	I0419 18:59:17.534809   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 systemd[1]: Starting Docker Application Container Engine...
	I0419 18:59:17.534809   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:47.551071055Z" level=info msg="Starting up"
	I0419 18:59:17.534809   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:47.552229889Z" level=info msg="containerd not running, starting managed containerd"
	I0419 18:59:17.534809   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:47.555196776Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1058
	I0419 18:59:17.534931   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.593728507Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0419 18:59:17.534931   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623742487Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0419 18:59:17.534931   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623851391Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0419 18:59:17.534931   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623939793Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0419 18:59:17.534931   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623957394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.534931   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624003795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:17.535068   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624024296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.535068   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624225802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:17.535068   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624329205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.535068   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624352205Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0419 18:59:17.535068   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624363806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.535068   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624391206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.535194   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624622913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.535194   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.627825907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:17.535194   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.627876709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.535302   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628096615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:17.535302   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628227419Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0419 18:59:17.535302   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628259620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0419 18:59:17.535302   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628280321Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0419 18:59:17.535383   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628292621Z" level=info msg="metadata content store policy set" policy=shared
	I0419 18:59:17.535383   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628514127Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0419 18:59:17.535462   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628716033Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0419 18:59:17.535462   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628764035Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0419 18:59:17.535462   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628783935Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0419 18:59:17.535541   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628872138Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0419 18:59:17.535541   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628938240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0419 18:59:17.535541   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.629513057Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0419 18:59:17.535541   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.629754764Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0419 18:59:17.535618   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.629936569Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0419 18:59:17.535618   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630060973Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0419 18:59:17.535618   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630086474Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.535697   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630105074Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.535697   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630122275Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.535697   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630140375Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.535697   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630157976Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.535786   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630174076Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.535786   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630191277Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.535786   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630206077Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.535862   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630234378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.535862   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630252178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.535862   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630267579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.535862   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630283379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.535945   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630298980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.535945   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630314780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.535945   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630328781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.535945   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630360082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.536032   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630377682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.536032   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630410083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.536032   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630423583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.536103   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630455984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.536103   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630487185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.536166   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630505186Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0419 18:59:17.536191   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630528987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630643490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630666391Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630895497Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630922398Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630934798Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630945799Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.631020001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.631067102Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.631083303Z" level=info msg="NRI interface is disabled by configuration."
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632230736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632319639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632396541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632594347Z" level=info msg="containerd successfully booted in 0.042627s"
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:48 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:48.604760074Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:48 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:48.637031921Z" level=info msg="Loading containers: start."
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:48 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:48.936729515Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.021589305Z" level=info msg="Loading containers: done."
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.048182786Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.048316590Z" level=info msg="Daemon has completed initialization"
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.095567976Z" level=info msg="API listen on /var/run/docker.sock"
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 systemd[1]: Started Docker Application Container Engine.
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.098304756Z" level=info msg="API listen on [::]:2376"
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:17.536751   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:17.536751   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:17.536751   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0419 18:59:17.536751   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Loaded network plugin cni"
	I0419 18:59:17.536751   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0419 18:59:17.536751   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0419 18:59:17.536751   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0419 18:59:17.536918   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0419 18:59:17.536918   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Start cri-dockerd grpc backend"
	I0419 18:59:17.536945   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0419 18:59:17.536945   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-xnz2k_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"476e3efb38684054cbc21c027cf1ddd3f9ca47bb829786f8636fd877fd4b2f81\""
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-7w477_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2dd294415aae178d6b9bed0368d49bedc6d0afa8f5b9ad0011c73ffcb2c24b3c\""
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.930297132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.930785146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.930860749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.931659072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002064338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002134840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002149541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002292345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e8baa597c1467ae8c3a1ce9abf0a378ddcffed5a93f7b41dddb4ce4511320dfd/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151299517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151377019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151407720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151504323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169004837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169190142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169211543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169324146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/118cca57d1f547838d0c2442f2945e9daf9b041170bf162489525286bf3d75c2/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7052a6f04def38545970026f2934eb29913066396b26eb86f6675e7c0c685db/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ab9ff1d9068805d6a2ad10084128436e5b1fcaaa8c64f2f1a5e811455f0f99ee/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:17.537586   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441120322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.537586   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441388229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.537586   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441493933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537586   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441783141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537586   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.541538868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.537728   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.541743874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.537787   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.541768275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537815   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.542244089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537847   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.635958239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.537847   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.636305549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.537847   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.636479754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537941   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.636776363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538004   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.703176711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.538049   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.703241613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.538049   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.703253713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538049   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.704949863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538049   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:00Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0419 18:59:17.538131   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.682944236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.538131   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.683066839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.538211   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.683087340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538211   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.683203743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538211   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.775229244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.538287   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.775527153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.538287   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.775671457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538287   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.776004967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538362   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.791300015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.538362   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.791478721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.538362   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.791611925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538439   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.792335946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538439   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/09f65a695303814b61d199dd53caa1efad532c76b04176a404206b865fd6b38a/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:17.538516   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5472c1fba3929b8a427273be545db7fb7df3c0ffbf035e24a1d3b71418b9e031/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:17.538576   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.150688061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.538611   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.150834665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.151084573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.152395011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.341191051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.341388457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.341505460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.342279283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b5a777eba295e3b640d8d8a60aedcc20243d0f4a6fc4d3f3391b06fc6de0247a/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.851490425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.852225247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.852338750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.853459583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1052]: time="2024-04-20T01:58:23.324898945Z" level=info msg="ignoring event" container=f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:23.325982179Z" level=info msg="shim disconnected" id=f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919 namespace=moby
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:23.326071582Z" level=warning msg="cleaning up after shim disconnected" id=f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919 namespace=moby
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:23.326085983Z" level=info msg="cleaning up dead shim" namespace=moby
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1052]: time="2024-04-20T01:58:32.676558128Z" level=info msg="ignoring event" container=45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:32.681127769Z" level=info msg="shim disconnected" id=45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702 namespace=moby
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:32.681255073Z" level=warning msg="cleaning up after shim disconnected" id=45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702 namespace=moby
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:32.681323075Z" level=info msg="cleaning up dead shim" namespace=moby
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356286643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356444648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356547351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356850260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539171   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.371313874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.539171   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.372274603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.539253   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.372497010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539253   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.373020725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539253   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.468874089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.539327   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.469011493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.469033394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.469948221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.577907307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.578194516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.578360121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.578991939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:59:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f28a1e746a9b438367a8e05d2e1a085afb4abec4174f7a7eb80549e02b95047a/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:59:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/75ff9f4e9dde29a997e4321dd3659a2ce7d479a75826a78c4d3525f1eb5f696f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.046055457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.046333943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.046360842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.047301594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.170326341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.170444835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.170467134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.171235195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539885   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539885   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539885   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539885   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539885   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539885   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540038   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:12 multinode-348000 dockerd[1052]: 2024/04/20 01:59:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:16 multinode-348000 dockerd[1052]: 2024/04/20 01:59:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:16 multinode-348000 dockerd[1052]: 2024/04/20 01:59:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:16 multinode-348000 dockerd[1052]: 2024/04/20 01:59:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:16 multinode-348000 dockerd[1052]: 2024/04/20 01:59:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:16 multinode-348000 dockerd[1052]: 2024/04/20 01:59:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540596   14960 command_runner.go:130] > Apr 20 01:59:17 multinode-348000 dockerd[1052]: 2024/04/20 01:59:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540596   14960 command_runner.go:130] > Apr 20 01:59:17 multinode-348000 dockerd[1052]: 2024/04/20 01:59:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540596   14960 command_runner.go:130] > Apr 20 01:59:17 multinode-348000 dockerd[1052]: 2024/04/20 01:59:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540596   14960 command_runner.go:130] > Apr 20 01:59:17 multinode-348000 dockerd[1052]: 2024/04/20 01:59:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540596   14960 command_runner.go:130] > Apr 20 01:59:17 multinode-348000 dockerd[1052]: 2024/04/20 01:59:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.574706   14960 logs.go:123] Gathering logs for kubelet ...
	I0419 18:59:17.574706   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 18:59:17.606833   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: I0420 01:57:51.575772    1390 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: I0420 01:57:51.576306    1390 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: I0420 01:57:51.577194    1390 server.go:927] "Client rotation is on, will bootstrap in background"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: E0420 01:57:51.579651    1390 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: I0420 01:57:52.300689    1443 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: I0420 01:57:52.301056    1443 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: I0420 01:57:52.301551    1443 server.go:927] "Client rotation is on, will bootstrap in background"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: E0420 01:57:52.301845    1443 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.955182    1526 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.955367    1526 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.955676    1526 server.go:927] "Client rotation is on, will bootstrap in background"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.957661    1526 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.971626    1526 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.998144    1526 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.998312    1526 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.999775    1526 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0419 18:59:17.607527   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:54.999948    1526 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-348000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0419 18:59:17.607527   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.000770    1526 topology_manager.go:138] "Creating topology manager with none policy"
	I0419 18:59:17.607686   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.000879    1526 container_manager_linux.go:301] "Creating device plugin manager"
	I0419 18:59:17.607686   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.001855    1526 state_mem.go:36] "Initialized new in-memory state store"
	I0419 18:59:17.607716   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.003861    1526 kubelet.go:400] "Attempting to sync node with API server"
	I0419 18:59:17.607716   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.003952    1526 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0419 18:59:17.607773   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.004045    1526 kubelet.go:312] "Adding apiserver pod source"
	I0419 18:59:17.607773   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.009472    1526 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0419 18:59:17.607773   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.017989    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.607908   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.018091    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.607908   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.019381    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.607908   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.019428    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.607994   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.019619    1526 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.1" apiVersion="v1"
	I0419 18:59:17.607994   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.022328    1526 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0419 18:59:17.608030   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.023051    1526 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0419 18:59:17.608030   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.025680    1526 server.go:1264] "Started kubelet"
	I0419 18:59:17.608030   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.028955    1526 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0419 18:59:17.608105   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.031361    1526 server.go:455] "Adding debug handlers to kubelet server"
	I0419 18:59:17.608105   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.034499    1526 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0419 18:59:17.608173   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.035670    1526 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0419 18:59:17.608232   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.036524    1526 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.19.42.24:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-348000.17c7da5cb9bb1787  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-348000,UID:multinode-348000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-348000,},FirstTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,LastTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-348000,}"
	I0419 18:59:17.608283   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.053292    1526 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0419 18:59:17.608319   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.062175    1526 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0419 18:59:17.608319   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.067879    1526 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0419 18:59:17.608386   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.097159    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="200ms"
	I0419 18:59:17.608408   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.116285    1526 factory.go:221] Registration of the systemd container factory successfully
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.117073    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.118285    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.117970    1526 reconciler.go:26] "Reconciler: start to sync state"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.118962    1526 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.119576    1526 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.135081    1526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.165861    1526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166700    1526 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166759    1526 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166846    1526 state_mem.go:36] "Initialized new in-memory state store"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166997    1526 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168395    1526 kubelet.go:2337] "Starting kubelet main sync loop"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.168500    1526 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168338    1526 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168585    1526 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168613    1526 policy_none.go:49] "None policy: Start"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.167637    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.171087    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.172453    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.172557    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.187830    1526 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.187946    1526 state_mem.go:35] "Initializing new in-memory state store"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.189368    1526 state_mem.go:75] "Updated machine memory state"
	I0419 18:59:17.608963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.195268    1526 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0419 18:59:17.608963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.195483    1526 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0419 18:59:17.608963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.197626    1526 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0419 18:59:17.608963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.198638    1526 iptables.go:577] "Could not set up iptables canary" err=<
	I0419 18:59:17.609046   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0419 18:59:17.609079   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0419 18:59:17.609079   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0419 18:59:17.609122   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0419 18:59:17.609122   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.201551    1526 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-348000\" not found"
	I0419 18:59:17.609206   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.269451    1526 topology_manager.go:215] "Topology Admit Handler" podUID="30aa2729d0c65b9f89e1ae2d151edd9b" podNamespace="kube-system" podName="kube-controller-manager-multinode-348000"
	I0419 18:59:17.609206   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.271913    1526 topology_manager.go:215] "Topology Admit Handler" podUID="92813b2aed63b63058d3fd06709fa24e" podNamespace="kube-system" podName="kube-scheduler-multinode-348000"
	I0419 18:59:17.609206   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.273779    1526 topology_manager.go:215] "Topology Admit Handler" podUID="af7a3c9321ace7e2a933260472b90113" podNamespace="kube-system" podName="kube-apiserver-multinode-348000"
	I0419 18:59:17.609287   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.275662    1526 topology_manager.go:215] "Topology Admit Handler" podUID="c0cfa3da6a3913c3e67500f6c3e9d72b" podNamespace="kube-system" podName="etcd-multinode-348000"
	I0419 18:59:17.609287   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.281258    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="476e3efb38684054cbc21c027cf1ddd3f9ca47bb829786f8636fd877fd4b2f81"
	I0419 18:59:17.609377   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.281433    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dd294415aae178d6b9bed0368d49bedc6d0afa8f5b9ad0011c73ffcb2c24b3c"
	I0419 18:59:17.609377   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.281454    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5d733991bf1a9e82ffd10768e0652c6c3f983ab24307142345cab3358f068bc"
	I0419 18:59:17.609563   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.297657    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd9e5fae3950c99e6cc71d6166919d407b00212c93827d74e5b83f3896925c0a"
	I0419 18:59:17.609563   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.310354    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="400ms"
	I0419 18:59:17.609643   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.316552    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="187cb57784f4ebcba88e5bf725c118a7d2beec4f543d3864e8f389573f0b11f9"
	I0419 18:59:17.609643   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.332421    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e420625b84be10aa87409a43f4296165b33ed76e82c3ba8a9214abd7177bd38"
	I0419 18:59:17.609719   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.356050    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00d48e11227effb5f0316d58c24e374b4b3f9dcd1b98ac51d6b0038a72d47e42"
	I0419 18:59:17.609719   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.372330    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:17.609795   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.373779    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:17.609795   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.376042    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da1d06ec238f43c7ad43cae75e142a6d15b9c8fb69f88ad8079f167f3f3a6fd4"
	I0419 18:59:17.609795   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.392858    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7935893e9f22a54393d2b3d0a644f7c11a848d5604938074232342a8602e239f"
	I0419 18:59:17.609872   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423082    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-ca-certs\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:17.609872   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423312    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-flexvolume-dir\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:17.609960   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423400    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-k8s-certs\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:17.610043   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423427    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-kubeconfig\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:17.610043   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423456    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af7a3c9321ace7e2a933260472b90113-ca-certs\") pod \"kube-apiserver-multinode-348000\" (UID: \"af7a3c9321ace7e2a933260472b90113\") " pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:17.610109   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423489    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/c0cfa3da6a3913c3e67500f6c3e9d72b-etcd-data\") pod \"etcd-multinode-348000\" (UID: \"c0cfa3da6a3913c3e67500f6c3e9d72b\") " pod="kube-system/etcd-multinode-348000"
	I0419 18:59:17.610194   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423525    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423552    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92813b2aed63b63058d3fd06709fa24e-kubeconfig\") pod \"kube-scheduler-multinode-348000\" (UID: \"92813b2aed63b63058d3fd06709fa24e\") " pod="kube-system/kube-scheduler-multinode-348000"
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423669    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af7a3c9321ace7e2a933260472b90113-k8s-certs\") pod \"kube-apiserver-multinode-348000\" (UID: \"af7a3c9321ace7e2a933260472b90113\") " pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423703    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af7a3c9321ace7e2a933260472b90113-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-348000\" (UID: \"af7a3c9321ace7e2a933260472b90113\") " pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423739    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/c0cfa3da6a3913c3e67500f6c3e9d72b-etcd-certs\") pod \"etcd-multinode-348000\" (UID: \"c0cfa3da6a3913c3e67500f6c3e9d72b\") " pod="kube-system/etcd-multinode-348000"
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.518144    1526 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.19.42.24:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-348000.17c7da5cb9bb1787  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-348000,UID:multinode-348000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-348000,},FirstTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,LastTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-348000,}"
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.713067    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="800ms"
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.777032    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.778597    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.832721    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.832971    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: W0420 01:57:56.061439    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.063005    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: W0420 01:57:56.073517    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.073647    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.610749   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: W0420 01:57:56.303763    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.610749   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.303918    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.610749   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.515345    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="1.6s"
	I0419 18:59:17.610749   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: I0420 01:57:56.583532    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:17.610749   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.584646    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:17.610908   14960 command_runner.go:130] > Apr 20 01:57:58 multinode-348000 kubelet[1526]: I0420 01:57:58.185924    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:17.610908   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.850138    1526 kubelet_node_status.go:112] "Node was previously registered" node="multinode-348000"
	I0419 18:59:17.610908   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.850459    1526 kubelet_node_status.go:76] "Successfully registered node" node="multinode-348000"
	I0419 18:59:17.610908   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.852895    1526 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0419 18:59:17.610908   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.854574    1526 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0419 18:59:17.611069   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.855598    1526 setters.go:580] "Node became not ready" node="multinode-348000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-04-20T01:58:00Z","lastTransitionTime":"2024-04-20T01:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.022496    1526 apiserver.go:52] "Watching apiserver"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.028549    1526 topology_manager.go:215] "Topology Admit Handler" podUID="274342c4-c21f-4279-b0ea-743d8e2c1463" podNamespace="kube-system" podName="kube-proxy-kj76x"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.028950    1526 topology_manager.go:215] "Topology Admit Handler" podUID="46c91d5e-edfa-4254-a802-148047caeab5" podNamespace="kube-system" podName="kindnet-s4fsr"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.029150    1526 topology_manager.go:215] "Topology Admit Handler" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7w477"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.029359    1526 topology_manager.go:215] "Topology Admit Handler" podUID="ffa0cfb9-91fb-4d5b-abe7-11992c731b74" podNamespace="kube-system" podName="storage-provisioner"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.029596    1526 topology_manager.go:215] "Topology Admit Handler" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916" podNamespace="default" podName="busybox-fc5497c4f-xnz2k"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.030004    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.030339    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-348000" podUID="af4afa87-c484-4b73-9a4d-e86ddcd90049"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.031127    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-348000" podUID="18f5e677-6a96-47ee-9f61-60ab9445eb92"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.036486    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.078433    1526 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-348000"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.080072    1526 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.080948    1526 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.155980    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/274342c4-c21f-4279-b0ea-743d8e2c1463-xtables-lock\") pod \"kube-proxy-kj76x\" (UID: \"274342c4-c21f-4279-b0ea-743d8e2c1463\") " pod="kube-system/kube-proxy-kj76x"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.156217    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/274342c4-c21f-4279-b0ea-743d8e2c1463-lib-modules\") pod \"kube-proxy-kj76x\" (UID: \"274342c4-c21f-4279-b0ea-743d8e2c1463\") " pod="kube-system/kube-proxy-kj76x"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157104    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/46c91d5e-edfa-4254-a802-148047caeab5-cni-cfg\") pod \"kindnet-s4fsr\" (UID: \"46c91d5e-edfa-4254-a802-148047caeab5\") " pod="kube-system/kindnet-s4fsr"
	I0419 18:59:17.611703   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157248    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46c91d5e-edfa-4254-a802-148047caeab5-xtables-lock\") pod \"kindnet-s4fsr\" (UID: \"46c91d5e-edfa-4254-a802-148047caeab5\") " pod="kube-system/kindnet-s4fsr"
	I0419 18:59:17.611873   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.157178    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:17.611873   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.157539    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:01.657504317 +0000 UTC m=+6.817666984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:17.611966   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157392    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ffa0cfb9-91fb-4d5b-abe7-11992c731b74-tmp\") pod \"storage-provisioner\" (UID: \"ffa0cfb9-91fb-4d5b-abe7-11992c731b74\") " pod="kube-system/storage-provisioner"
	I0419 18:59:17.611966   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157844    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46c91d5e-edfa-4254-a802-148047caeab5-lib-modules\") pod \"kindnet-s4fsr\" (UID: \"46c91d5e-edfa-4254-a802-148047caeab5\") " pod="kube-system/kindnet-s4fsr"
	I0419 18:59:17.612051   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.176143    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89aa15d5f8e328791151d96100a36918" path="/var/lib/kubelet/pods/89aa15d5f8e328791151d96100a36918/volumes"
	I0419 18:59:17.612079   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.179130    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fef0b92f87f018a58c19217fdf5d6e1" path="/var/lib/kubelet/pods/8fef0b92f87f018a58c19217fdf5d6e1/volumes"
	I0419 18:59:17.612115   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.206903    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612150   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.207139    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.207264    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:01.707244177 +0000 UTC m=+6.867406744 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.241569    1526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-348000" podStartSLOduration=0.241545984 podStartE2EDuration="241.545984ms" podCreationTimestamp="2024-04-20 01:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-20 01:58:01.218870918 +0000 UTC m=+6.379033485" watchObservedRunningTime="2024-04-20 01:58:01.241545984 +0000 UTC m=+6.401708551"
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.287607    1526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-348000" podStartSLOduration=0.287584435 podStartE2EDuration="287.584435ms" podCreationTimestamp="2024-04-20 01:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-20 01:58:01.265671392 +0000 UTC m=+6.425834059" watchObservedRunningTime="2024-04-20 01:58:01.287584435 +0000 UTC m=+6.447747102"
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.663973    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.664077    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:02.664058382 +0000 UTC m=+7.824220949 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.764474    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.764518    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.764584    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:02.764566131 +0000 UTC m=+7.924728698 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: I0420 01:58:02.563904    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5a777eba295e3b640d8d8a60aedcc20243d0f4a6fc4d3f3391b06fc6de0247a"
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.564077    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: I0420 01:58:02.565075    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-348000" podUID="af4afa87-c484-4b73-9a4d-e86ddcd90049"
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.679358    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.679588    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:04.67956768 +0000 UTC m=+9.839730247 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.789713    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.791860    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.792206    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:04.792183185 +0000 UTC m=+9.952345752 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612773   14960 command_runner.go:130] > Apr 20 01:58:03 multinode-348000 kubelet[1526]: E0420 01:58:03.170851    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.612818   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.169519    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.612818   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.700421    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.700676    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:08.700644486 +0000 UTC m=+13.860807053 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.801637    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.801751    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.801874    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:08.801835856 +0000 UTC m=+13.961998423 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:05 multinode-348000 kubelet[1526]: E0420 01:58:05.169947    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:06 multinode-348000 kubelet[1526]: E0420 01:58:06.169499    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:07 multinode-348000 kubelet[1526]: E0420 01:58:07.170147    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.169208    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.751778    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.752347    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:16.752328447 +0000 UTC m=+21.912491114 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.852291    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.852347    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.852455    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:16.852435774 +0000 UTC m=+22.012598341 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.613486   14960 command_runner.go:130] > Apr 20 01:58:09 multinode-348000 kubelet[1526]: E0420 01:58:09.169017    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.613536   14960 command_runner.go:130] > Apr 20 01:58:10 multinode-348000 kubelet[1526]: E0420 01:58:10.169399    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.613536   14960 command_runner.go:130] > Apr 20 01:58:11 multinode-348000 kubelet[1526]: E0420 01:58:11.169467    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.613638   14960 command_runner.go:130] > Apr 20 01:58:12 multinode-348000 kubelet[1526]: E0420 01:58:12.169441    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.613659   14960 command_runner.go:130] > Apr 20 01:58:13 multinode-348000 kubelet[1526]: E0420 01:58:13.169983    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.613741   14960 command_runner.go:130] > Apr 20 01:58:14 multinode-348000 kubelet[1526]: E0420 01:58:14.169635    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.613788   14960 command_runner.go:130] > Apr 20 01:58:15 multinode-348000 kubelet[1526]: E0420 01:58:15.169488    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.613820   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.169756    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.613886   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.835157    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.835299    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:32.835279204 +0000 UTC m=+37.995441771 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.936116    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.936169    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.936232    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:32.936212581 +0000 UTC m=+38.096375148 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:17 multinode-348000 kubelet[1526]: E0420 01:58:17.169160    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:18 multinode-348000 kubelet[1526]: E0420 01:58:18.171760    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:19 multinode-348000 kubelet[1526]: E0420 01:58:19.169723    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:20 multinode-348000 kubelet[1526]: E0420 01:58:20.169542    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:21 multinode-348000 kubelet[1526]: E0420 01:58:21.169675    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:22 multinode-348000 kubelet[1526]: E0420 01:58:22.169364    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: E0420 01:58:23.169569    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.614447   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: I0420 01:58:23.960680    1526 scope.go:117] "RemoveContainer" containerID="8a37c65d06fabf8d836ffb9a511bb6df5b549fa37051ef79f1f839076af60512"
	I0419 18:59:17.614447   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: I0420 01:58:23.961154    1526 scope.go:117] "RemoveContainer" containerID="f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919"
	I0419 18:59:17.614506   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: E0420 01:58:23.961603    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kindnet-cni pod=kindnet-s4fsr_kube-system(46c91d5e-edfa-4254-a802-148047caeab5)\"" pod="kube-system/kindnet-s4fsr" podUID="46c91d5e-edfa-4254-a802-148047caeab5"
	I0419 18:59:17.614506   14960 command_runner.go:130] > Apr 20 01:58:24 multinode-348000 kubelet[1526]: E0420 01:58:24.169608    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.614606   14960 command_runner.go:130] > Apr 20 01:58:25 multinode-348000 kubelet[1526]: E0420 01:58:25.169976    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.614606   14960 command_runner.go:130] > Apr 20 01:58:26 multinode-348000 kubelet[1526]: E0420 01:58:26.169734    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.614667   14960 command_runner.go:130] > Apr 20 01:58:27 multinode-348000 kubelet[1526]: E0420 01:58:27.170054    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.614667   14960 command_runner.go:130] > Apr 20 01:58:28 multinode-348000 kubelet[1526]: E0420 01:58:28.169260    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.614667   14960 command_runner.go:130] > Apr 20 01:58:29 multinode-348000 kubelet[1526]: E0420 01:58:29.169306    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.614667   14960 command_runner.go:130] > Apr 20 01:58:30 multinode-348000 kubelet[1526]: E0420 01:58:30.169857    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.614667   14960 command_runner.go:130] > Apr 20 01:58:31 multinode-348000 kubelet[1526]: E0420 01:58:31.169543    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.614667   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.169556    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.614667   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.891318    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:17.614667   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.891496    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:59:04.891477649 +0000 UTC m=+70.051640216 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:17.615273   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.992269    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.615273   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.992577    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.615561   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.992723    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:59:04.992688767 +0000 UTC m=+70.152851434 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.615631   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: I0420 01:58:33.115355    1526 scope.go:117] "RemoveContainer" containerID="e248c230a4aa379bf469f41a95d1ea2033316d322a10b6da0ae06f656334b936"
	I0419 18:59:17.615653   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: I0420 01:58:33.115897    1526 scope.go:117] "RemoveContainer" containerID="45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702"
	I0419 18:59:17.615695   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: E0420 01:58:33.116183    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ffa0cfb9-91fb-4d5b-abe7-11992c731b74)\"" pod="kube-system/storage-provisioner" podUID="ffa0cfb9-91fb-4d5b-abe7-11992c731b74"
	I0419 18:59:17.615734   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: E0420 01:58:33.169303    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.615781   14960 command_runner.go:130] > Apr 20 01:58:34 multinode-348000 kubelet[1526]: E0420 01:58:34.169175    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:35 multinode-348000 kubelet[1526]: E0420 01:58:35.169508    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 kubelet[1526]: E0420 01:58:36.169960    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 kubelet[1526]: I0420 01:58:36.170769    1526 scope.go:117] "RemoveContainer" containerID="f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:37 multinode-348000 kubelet[1526]: E0420 01:58:37.171433    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:38 multinode-348000 kubelet[1526]: E0420 01:58:38.169747    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:39 multinode-348000 kubelet[1526]: E0420 01:58:39.169252    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:40 multinode-348000 kubelet[1526]: E0420 01:58:40.169368    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:40 multinode-348000 kubelet[1526]: I0420 01:58:40.269590    1526 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 kubelet[1526]: I0420 01:58:45.169759    1526 scope.go:117] "RemoveContainer" containerID="45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]: I0420 01:58:55.162183    1526 scope.go:117] "RemoveContainer" containerID="490377504e57c3189163833390967e79bb80d222691d4402677feb6f25ed22f4"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]: I0420 01:58:55.206283    1526 scope.go:117] "RemoveContainer" containerID="53f6a00490766be2eb687e6fff052ca7a46ae16a0baf4551e956c81550d673b2"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]: E0420 01:58:55.212558    1526 iptables.go:577] "Could not set up iptables canary" err=<
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 kubelet[1526]: I0420 01:59:05.918992    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75ff9f4e9dde29a997e4321dd3659a2ce7d479a75826a78c4d3525f1eb5f696f"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 kubelet[1526]: I0420 01:59:05.948376    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f28a1e746a9b438367a8e05d2e1a085afb4abec4174f7a7eb80549e02b95047a"
	I0419 18:59:17.662136   14960 logs.go:123] Gathering logs for etcd [2deabe4dbdf4] ...
	I0419 18:59:17.663139   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2deabe4dbdf4"
	I0419 18:59:17.696737   14960 command_runner.go:130] ! {"level":"warn","ts":"2024-04-20T01:57:57.046906Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0419 18:59:17.696981   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.051203Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.19.42.24:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.19.42.24:2380","--initial-cluster=multinode-348000=https://172.19.42.24:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.19.42.24:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.19.42.24:2380","--name=multinode-348000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0419 18:59:17.696981   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.05132Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0419 18:59:17.696981   14960 command_runner.go:130] ! {"level":"warn","ts":"2024-04-20T01:57:57.053068Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0419 18:59:17.696981   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.053085Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.19.42.24:2380"]}
	I0419 18:59:17.696981   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.053402Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0419 18:59:17.697091   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.06821Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"]}
	I0419 18:59:17.697141   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.071769Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-348000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.19.42.24:2380"],"listen-peer-urls":["https://172.19.42.24:2380"],"advertise-client-urls":["https://172.19.42.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0419 18:59:17.697240   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.117145Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"37.959314ms"}
	I0419 18:59:17.697240   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.163657Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0419 18:59:17.697240   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186114Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","commit-index":1996}
	I0419 18:59:17.697240   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c switched to configuration voters=()"}
	I0419 18:59:17.697337   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became follower at term 2"}
	I0419 18:59:17.697337   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 4fba18389b33806c [peers: [], term: 2, commit: 1996, applied: 0, lastindex: 1996, lastterm: 2]"}
	I0419 18:59:17.697337   14960 command_runner.go:130] ! {"level":"warn","ts":"2024-04-20T01:57:57.204366Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0419 18:59:17.697395   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.210889Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1364}
	I0419 18:59:17.697395   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.22333Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1726}
	I0419 18:59:17.697395   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.233905Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0419 18:59:17.697464   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.247902Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"4fba18389b33806c","timeout":"7s"}
	I0419 18:59:17.697482   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.252957Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"4fba18389b33806c"}
	I0419 18:59:17.697507   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.253239Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"4fba18389b33806c","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0419 18:59:17.697507   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.257675Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0419 18:59:17.697580   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.259962Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0419 18:59:17.697580   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.260237Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0419 18:59:17.697636   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.26046Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0419 18:59:17.697674   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c switched to configuration voters=(5744930906065567852)"}
	I0419 18:59:17.697732   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264281Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","added-peer-id":"4fba18389b33806c","added-peer-peer-urls":["https://172.19.42.231:2380"]}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264439Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","cluster-version":"3.5"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264612Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.271976Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.273753Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4fba18389b33806c","initial-advertise-peer-urls":["https://172.19.42.24:2380"],"listen-peer-urls":["https://172.19.42.24:2380"],"advertise-client-urls":["https://172.19.42.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.27526Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.27622Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.42.24:2380"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.277207Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.42.24:2380"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c is starting a new election at term 2"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became pre-candidate at term 2"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c received MsgPreVoteResp from 4fba18389b33806c at term 2"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became candidate at term 3"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c received MsgVoteResp from 4fba18389b33806c at term 3"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became leader at term 3"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4fba18389b33806c elected leader 4fba18389b33806c at term 3"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.994477Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4fba18389b33806c","local-member-attributes":"{Name:multinode-348000 ClientURLs:[https://172.19.42.24:2379]}","request-path":"/0/members/4fba18389b33806c/attributes","cluster-id":"dca2ede42d67bc1c","publish-timeout":"7s"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.994493Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.994512Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.996572Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.996617Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.999043Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.42.24:2379"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.999341Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0419 18:59:17.706971   14960 logs.go:123] Gathering logs for coredns [352cf21a3e20] ...
	I0419 18:59:17.706971   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 352cf21a3e20"
	I0419 18:59:17.736117   14960 command_runner.go:130] > .:53
	I0419 18:59:17.736117   14960 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93714cfd58e203ac2baa48ea9c7b435951d2a9faed7a5c70b4e84c89c6c1fe4c1dfa41f14b3ebf0f5941dade673a82eaad960061e673dd78dcb856db3393b39d
	I0419 18:59:17.736117   14960 command_runner.go:130] > CoreDNS-1.11.1
	I0419 18:59:17.736117   14960 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0419 18:59:17.736117   14960 command_runner.go:130] > [INFO] 127.0.0.1:51206 - 14298 "HINFO IN 4972057462503628469.2167329557243878603. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028297062s
	I0419 18:59:20.236754   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods
	I0419 18:59:20.236754   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:20.236754   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:20.236754   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:20.244481   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 18:59:20.244481   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:20.244481   14960 round_trippers.go:580]     Audit-Id: cfc1e882-a2ad-48e3-81e8-5eb5b902c307
	I0419 18:59:20.244481   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:20.244481   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:20.244481   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:20.244481   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:20.244481   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:20 GMT
	I0419 18:59:20.246285   14960 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1957"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1944","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86494 chars]
	I0419 18:59:20.250417   14960 system_pods.go:59] 12 kube-system pods found
	I0419 18:59:20.250417   14960 system_pods.go:61] "coredns-7db6d8ff4d-7w477" [895ddde9-466d-4abf-b6f4-594847b26c6c] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "etcd-multinode-348000" [33702588-cdf3-4577-b18d-18415cca2c25] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "kindnet-mg8qs" [c6e448a2-6f0c-4c7f-aa8b-0d585c84b09e] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "kindnet-s4fsr" [46c91d5e-edfa-4254-a802-148047caeab5] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "kindnet-s98rh" [551f5bde-7c56-4023-ad92-a2d7a122da60] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "kube-apiserver-multinode-348000" [13adbf1b-6c17-47a9-951d-2481680a47bd] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "kube-controller-manager-multinode-348000" [299bb088-9795-4452-87a8-5e96bcacedde] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "kube-proxy-2jjsq" [f9666ab7-0d1f-4800-b979-6e38fecdc518] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "kube-proxy-bjv9b" [3e909d14-543a-4734-8c17-7e2b8188553d] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "kube-proxy-kj76x" [274342c4-c21f-4279-b0ea-743d8e2c1463] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "kube-scheduler-multinode-348000" [000cfafe-a513-4738-9de2-3c25244b72be] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "storage-provisioner" [ffa0cfb9-91fb-4d5b-abe7-11992c731b74] Running
	I0419 18:59:20.250972   14960 system_pods.go:74] duration metric: took 3.824064s to wait for pod list to return data ...
	I0419 18:59:20.251074   14960 default_sa.go:34] waiting for default service account to be created ...
	I0419 18:59:20.251074   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/default/serviceaccounts
	I0419 18:59:20.251074   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:20.251074   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:20.251074   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:20.255435   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:20.255840   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:20.255840   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:20.255840   14960 round_trippers.go:580]     Content-Length: 262
	I0419 18:59:20.255840   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:20 GMT
	I0419 18:59:20.255840   14960 round_trippers.go:580]     Audit-Id: 7b5244ac-421e-4c65-90cc-38ccffaafc57
	I0419 18:59:20.255840   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:20.255840   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:20.255840   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:20.255840   14960 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1957"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"fd56f1e7-7816-4124-aeed-e48a3ea6b7a7","resourceVersion":"301","creationTimestamp":"2024-04-20T01:35:22Z"}}]}
	I0419 18:59:20.255840   14960 default_sa.go:45] found service account: "default"
	I0419 18:59:20.255840   14960 default_sa.go:55] duration metric: took 4.7668ms for default service account to be created ...
	I0419 18:59:20.255840   14960 system_pods.go:116] waiting for k8s-apps to be running ...
	I0419 18:59:20.255840   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods
	I0419 18:59:20.256425   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:20.256425   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:20.256425   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:20.261095   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:20.261095   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:20.261095   14960 round_trippers.go:580]     Audit-Id: d36db68a-0854-4d15-92ee-0523cdca6651
	I0419 18:59:20.261095   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:20.261624   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:20.261624   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:20.261624   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:20.261624   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:20 GMT
	I0419 18:59:20.263006   14960 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1957"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1944","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86494 chars]
	I0419 18:59:20.267133   14960 system_pods.go:86] 12 kube-system pods found
	I0419 18:59:20.267201   14960 system_pods.go:89] "coredns-7db6d8ff4d-7w477" [895ddde9-466d-4abf-b6f4-594847b26c6c] Running
	I0419 18:59:20.267201   14960 system_pods.go:89] "etcd-multinode-348000" [33702588-cdf3-4577-b18d-18415cca2c25] Running
	I0419 18:59:20.267201   14960 system_pods.go:89] "kindnet-mg8qs" [c6e448a2-6f0c-4c7f-aa8b-0d585c84b09e] Running
	I0419 18:59:20.267201   14960 system_pods.go:89] "kindnet-s4fsr" [46c91d5e-edfa-4254-a802-148047caeab5] Running
	I0419 18:59:20.267249   14960 system_pods.go:89] "kindnet-s98rh" [551f5bde-7c56-4023-ad92-a2d7a122da60] Running
	I0419 18:59:20.267249   14960 system_pods.go:89] "kube-apiserver-multinode-348000" [13adbf1b-6c17-47a9-951d-2481680a47bd] Running
	I0419 18:59:20.267249   14960 system_pods.go:89] "kube-controller-manager-multinode-348000" [299bb088-9795-4452-87a8-5e96bcacedde] Running
	I0419 18:59:20.267249   14960 system_pods.go:89] "kube-proxy-2jjsq" [f9666ab7-0d1f-4800-b979-6e38fecdc518] Running
	I0419 18:59:20.267308   14960 system_pods.go:89] "kube-proxy-bjv9b" [3e909d14-543a-4734-8c17-7e2b8188553d] Running
	I0419 18:59:20.267308   14960 system_pods.go:89] "kube-proxy-kj76x" [274342c4-c21f-4279-b0ea-743d8e2c1463] Running
	I0419 18:59:20.267308   14960 system_pods.go:89] "kube-scheduler-multinode-348000" [000cfafe-a513-4738-9de2-3c25244b72be] Running
	I0419 18:59:20.267308   14960 system_pods.go:89] "storage-provisioner" [ffa0cfb9-91fb-4d5b-abe7-11992c731b74] Running
	I0419 18:59:20.267308   14960 system_pods.go:126] duration metric: took 11.4671ms to wait for k8s-apps to be running ...
	I0419 18:59:20.267390   14960 system_svc.go:44] waiting for kubelet service to be running ....
	I0419 18:59:20.280549   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 18:59:20.308081   14960 system_svc.go:56] duration metric: took 40.5956ms WaitForService to wait for kubelet
	I0419 18:59:20.308143   14960 kubeadm.go:576] duration metric: took 1m14.7232798s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 18:59:20.308200   14960 node_conditions.go:102] verifying NodePressure condition ...
	I0419 18:59:20.308262   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes
	I0419 18:59:20.308262   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:20.308262   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:20.308262   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:20.313673   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:59:20.313673   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:20.313749   14960 round_trippers.go:580]     Audit-Id: c51e48ad-320b-427f-b68d-48c98d19d4b5
	I0419 18:59:20.313749   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:20.313749   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:20.313749   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:20.313749   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:20.313749   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:20 GMT
	I0419 18:59:20.314301   14960 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1957"},"items":[{"metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16255 chars]
	I0419 18:59:20.315722   14960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 18:59:20.315849   14960 node_conditions.go:123] node cpu capacity is 2
	I0419 18:59:20.315902   14960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 18:59:20.315902   14960 node_conditions.go:123] node cpu capacity is 2
	I0419 18:59:20.315902   14960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 18:59:20.315902   14960 node_conditions.go:123] node cpu capacity is 2
	I0419 18:59:20.315902   14960 node_conditions.go:105] duration metric: took 7.7018ms to run NodePressure ...
	I0419 18:59:20.315977   14960 start.go:240] waiting for startup goroutines ...
	I0419 18:59:20.315977   14960 start.go:245] waiting for cluster config update ...
	I0419 18:59:20.316020   14960 start.go:254] writing updated cluster config ...
	I0419 18:59:20.321504   14960 out.go:177] 
	I0419 18:59:20.324144   14960 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:59:20.334295   14960 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:59:20.334527   14960 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 18:59:20.340312   14960 out.go:177] * Starting "multinode-348000-m02" worker node in "multinode-348000" cluster
	I0419 18:59:20.343001   14960 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 18:59:20.343001   14960 cache.go:56] Caching tarball of preloaded images
	I0419 18:59:20.343799   14960 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0419 18:59:20.343799   14960 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 18:59:20.344338   14960 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 18:59:20.346950   14960 start.go:360] acquireMachinesLock for multinode-348000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 18:59:20.347102   14960 start.go:364] duration metric: took 76µs to acquireMachinesLock for "multinode-348000-m02"
	I0419 18:59:20.347328   14960 start.go:96] Skipping create...Using existing machine configuration
	I0419 18:59:20.347328   14960 fix.go:54] fixHost starting: m02
	I0419 18:59:20.347486   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:59:22.482592   14960 main.go:141] libmachine: [stdout =====>] : Off
	
	I0419 18:59:22.482592   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:22.482592   14960 fix.go:112] recreateIfNeeded on multinode-348000-m02: state=Stopped err=<nil>
	W0419 18:59:22.482592   14960 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 18:59:22.485353   14960 out.go:177] * Restarting existing hyperv VM for "multinode-348000-m02" ...
	I0419 18:59:22.488699   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-348000-m02
	I0419 18:59:25.551046   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:59:25.551046   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:25.551118   14960 main.go:141] libmachine: Waiting for host to start...
	I0419 18:59:25.551118   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:59:27.746071   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:59:27.746071   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:27.746319   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:59:30.267148   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:59:30.267323   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:31.281397   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:59:33.448302   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:59:33.448302   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:33.448302   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:59:35.954324   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:59:35.954718   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:36.969477   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:59:39.101528   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:59:39.101528   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:39.101528   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:59:41.601589   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:59:41.601589   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:42.602907   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:59:44.806448   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:59:44.806928   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:44.807070   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:59:47.357106   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:59:47.358115   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:48.359673   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:59:50.574810   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:59:50.574810   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:50.575478   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:59:53.157141   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 18:59:53.157141   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:53.157141   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:59:55.315053   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:59:55.315053   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:55.316120   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:59:57.899958   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 18:59:57.900459   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:57.900824   14960 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 18:59:57.903342   14960 machine.go:94] provisionDockerMachine start ...
	I0419 18:59:57.903418   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:00.053036   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:00.054023   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:00.054099   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:02.665325   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:02.665325   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:02.671525   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 19:00:02.672246   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.34 22 <nil> <nil>}
	I0419 19:00:02.672246   14960 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 19:00:02.812690   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0419 19:00:02.813294   14960 buildroot.go:166] provisioning hostname "multinode-348000-m02"
	I0419 19:00:02.813294   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:04.968843   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:04.968843   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:04.969325   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:07.568901   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:07.568901   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:07.577137   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 19:00:07.577926   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.34 22 <nil> <nil>}
	I0419 19:00:07.577926   14960 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-348000-m02 && echo "multinode-348000-m02" | sudo tee /etc/hostname
	I0419 19:00:07.742489   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-348000-m02
	
	I0419 19:00:07.742618   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:09.863375   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:09.863375   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:09.863375   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:12.478404   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:12.478404   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:12.485486   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 19:00:12.485645   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.34 22 <nil> <nil>}
	I0419 19:00:12.485645   14960 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-348000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-348000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-348000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 19:00:12.646037   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
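The command echoed just above is the provisioner's idempotent /etc/hosts fix: only if no line already maps the hostname does it either rewrite an existing 127.0.1.1 entry in place or append a new one, so re-provisioning the same machine is a no-op. A sketch of rendering that snippet for a given hostname (hypothetical helper, not minikube's actual source):

```go
package main

import "fmt"

// hostsFixCmd returns a shell snippet that maps 127.0.1.1 to hostname in
// /etc/hosts: it edits an existing 127.0.1.1 line if one is present,
// appends one otherwise, and does nothing if the hostname is already mapped.
func hostsFixCmd(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
	else
		echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
	fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsFixCmd("multinode-348000-m02"))
}
```

The empty `SSH cmd err, output` line that follows in the log is the expected result on a reused machine: the guard `grep -xq` matched, so neither branch ran.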
	I0419 19:00:12.646037   14960 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0419 19:00:12.646037   14960 buildroot.go:174] setting up certificates
	I0419 19:00:12.646037   14960 provision.go:84] configureAuth start
	I0419 19:00:12.646037   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:14.793172   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:14.793172   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:14.794080   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:17.365754   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:17.365985   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:17.365985   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:19.463864   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:19.463864   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:19.463864   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:22.073382   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:22.073475   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:22.073475   14960 provision.go:143] copyHostCerts
	I0419 19:00:22.073756   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0419 19:00:22.074106   14960 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0419 19:00:22.074106   14960 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0419 19:00:22.074589   14960 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0419 19:00:22.075933   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0419 19:00:22.076189   14960 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0419 19:00:22.076318   14960 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0419 19:00:22.076741   14960 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0419 19:00:22.077797   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0419 19:00:22.078190   14960 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0419 19:00:22.078190   14960 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0419 19:00:22.078569   14960 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0419 19:00:22.079605   14960 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-348000-m02 san=[127.0.0.1 172.19.47.34 localhost minikube multinode-348000-m02]
	I0419 19:00:22.251286   14960 provision.go:177] copyRemoteCerts
	I0419 19:00:22.267070   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 19:00:22.267070   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:24.361051   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:24.361051   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:24.361575   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:26.924432   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:26.924683   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:26.924813   14960 sshutil.go:53] new ssh client: &{IP:172.19.47.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\id_rsa Username:docker}
	I0419 19:00:27.029393   14960 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7622522s)
	I0419 19:00:27.029451   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0419 19:00:27.030087   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0419 19:00:27.080733   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0419 19:00:27.080931   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0419 19:00:27.128736   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0419 19:00:27.129594   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 19:00:27.182369   14960 provision.go:87] duration metric: took 14.5362365s to configureAuth
	I0419 19:00:27.182514   14960 buildroot.go:189] setting minikube options for container-runtime
	I0419 19:00:27.183524   14960 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 19:00:27.183693   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:29.286933   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:29.287758   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:29.287897   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:31.811437   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:31.811437   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:31.820895   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 19:00:31.821699   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.34 22 <nil> <nil>}
	I0419 19:00:31.821699   14960 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0419 19:00:31.968296   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0419 19:00:31.968296   14960 buildroot.go:70] root file system type: tmpfs
	I0419 19:00:31.968830   14960 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0419 19:00:31.968830   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:34.075654   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:34.075654   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:34.075957   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:36.589896   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:36.590132   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:36.596357   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 19:00:36.596357   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.34 22 <nil> <nil>}
	I0419 19:00:36.596357   14960 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.42.24"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0419 19:00:36.761782   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.42.24
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0419 19:00:36.761928   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:38.810210   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:38.811103   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:38.811218   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:41.347653   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:41.348654   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:41.354513   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 19:00:41.354513   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.34 22 <nil> <nil>}
	I0419 19:00:41.355041   14960 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0419 19:00:43.742202   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0419 19:00:43.742202   14960 machine.go:97] duration metric: took 45.8387639s to provisionDockerMachine
	I0419 19:00:43.742202   14960 start.go:293] postStartSetup for "multinode-348000-m02" (driver="hyperv")
	I0419 19:00:43.742202   14960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 19:00:43.756195   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 19:00:43.756195   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:45.829676   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:45.830233   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:45.830330   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:48.407654   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:48.407978   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:48.408181   14960 sshutil.go:53] new ssh client: &{IP:172.19.47.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\id_rsa Username:docker}
	I0419 19:00:48.513231   14960 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7570266s)
	I0419 19:00:48.529082   14960 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 19:00:48.537839   14960 command_runner.go:130] > NAME=Buildroot
	I0419 19:00:48.537839   14960 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0419 19:00:48.537839   14960 command_runner.go:130] > ID=buildroot
	I0419 19:00:48.537839   14960 command_runner.go:130] > VERSION_ID=2023.02.9
	I0419 19:00:48.537839   14960 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0419 19:00:48.537839   14960 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 19:00:48.537839   14960 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0419 19:00:48.538375   14960 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0419 19:00:48.539495   14960 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> 34162.pem in /etc/ssl/certs
	I0419 19:00:48.539495   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /etc/ssl/certs/34162.pem
	I0419 19:00:48.553246   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 19:00:48.578189   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /etc/ssl/certs/34162.pem (1708 bytes)
	I0419 19:00:48.627075   14960 start.go:296] duration metric: took 4.8848625s for postStartSetup
	I0419 19:00:48.627075   14960 fix.go:56] duration metric: took 1m28.2795619s for fixHost
	I0419 19:00:48.627075   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:50.805935   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:50.806884   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:50.806884   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:53.447848   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:53.448572   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:53.454794   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 19:00:53.455480   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.34 22 <nil> <nil>}
	I0419 19:00:53.455480   14960 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0419 19:00:53.597030   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713578453.581321408
	
	I0419 19:00:53.597133   14960 fix.go:216] guest clock: 1713578453.581321408
	I0419 19:00:53.597133   14960 fix.go:229] Guest: 2024-04-19 19:00:53.581321408 -0700 PDT Remote: 2024-04-19 19:00:48.6270755 -0700 PDT m=+296.820333301 (delta=4.954245908s)
	I0419 19:00:53.597263   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:55.693712   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:55.694736   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:55.694796   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:58.238910   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:58.238910   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:58.245560   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 19:00:58.245884   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.34 22 <nil> <nil>}
	I0419 19:00:58.245884   14960 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713578453
	I0419 19:00:58.390249   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: Sat Apr 20 02:00:53 UTC 2024
	
	I0419 19:00:58.390302   14960 fix.go:236] clock set: Sat Apr 20 02:00:53 UTC 2024
	 (err=<nil>)
	I0419 19:00:58.390302   14960 start.go:83] releasing machines lock for "multinode-348000-m02", held for 1m38.0428837s
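The guest-clock fix above (fix.go) reads the VM's `date +%s.%N` over SSH, compares it with the host's clock, and past a threshold resets it with `sudo date -s @<epoch>`. A minimal sketch of that delta computation, with the local clock standing in for the guest and a 5-second skew assumed:

```shell
# Sketch of minikube's guest/host clock comparison (all values local, skew invented).
guest=$(date +%s)            # epoch seconds, as `date +%s.%N` returns on the guest
ref=$((guest - 5))           # pretend the authoritative reference is 5s behind
delta=$((guest - ref))
echo "$delta" > /tmp/demo-clock-delta
echo "delta=${delta}s; minikube would run over SSH: sudo date -s @${ref}"
```

The real code keeps sub-second precision (`%N`) when computing the delta; only the whole-second epoch is pushed back via `date -s`.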
	I0419 19:00:58.390545   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:01:00.450003   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:01:00.450003   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:00.450117   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:01:03.040422   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:01:03.040768   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:03.044247   14960 out.go:177] * Found network options:
	I0419 19:01:03.046833   14960 out.go:177]   - NO_PROXY=172.19.42.24
	W0419 19:01:03.048991   14960 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 19:01:03.051262   14960 out.go:177]   - NO_PROXY=172.19.42.24
	W0419 19:01:03.053333   14960 proxy.go:119] fail to check proxy env: Error ip not in block
	W0419 19:01:03.054258   14960 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 19:01:03.057094   14960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 19:01:03.057094   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:01:03.067565   14960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0419 19:01:03.068567   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:01:05.208204   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:01:05.208701   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:05.208871   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:01:05.220683   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:01:05.220683   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:05.220683   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:01:07.832195   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:01:07.832195   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:07.832953   14960 sshutil.go:53] new ssh client: &{IP:172.19.47.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\id_rsa Username:docker}
	I0419 19:01:07.859035   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:01:07.859035   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:07.859982   14960 sshutil.go:53] new ssh client: &{IP:172.19.47.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\id_rsa Username:docker}
	I0419 19:01:08.053699   14960 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0419 19:01:08.053869   14960 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9966973s)
	I0419 19:01:08.053869   14960 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0419 19:01:08.053929   14960 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9853511s)
	W0419 19:01:08.054000   14960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 19:01:08.073960   14960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 19:01:08.108058   14960 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0419 19:01:08.108114   14960 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
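The `find ... -name *bridge* -or -name *podman* ... mv {} {}.mk_disabled` step above sidelines pre-installed bridge/podman CNI configs so they cannot shadow the cluster's own CNI. The same rename pattern against a scratch directory (paths and filenames hypothetical):

```shell
# Sketch of the bridge-CNI disable step on a throwaway net.d directory.
d=/tmp/demo-cni/net.d
mkdir -p "$d"
touch "$d/87-podman-bridge.conflist" "$d/10-keep.conf"
# Match bridge/podman configs not already disabled, print them, then rename.
find "$d" -maxdepth 1 -type f \( \( -name '*bridge*' -or -name '*podman*' \) \
  -and -not -name '*.mk_disabled' \) -printf '%p, ' \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
echo
ls "$d"
```

The `.mk_disabled` suffix keeps the original file recoverable while making it invisible to the kubelet's `.conflist` loader.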
	I0419 19:01:08.108114   14960 start.go:494] detecting cgroup driver to use...
	I0419 19:01:08.108114   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 19:01:08.147428   14960 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0419 19:01:08.162147   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0419 19:01:08.197273   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0419 19:01:08.221559   14960 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0419 19:01:08.235303   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0419 19:01:08.269022   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 19:01:08.308858   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0419 19:01:08.352935   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 19:01:08.388625   14960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 19:01:08.425846   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0419 19:01:08.465683   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0419 19:01:08.501891   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
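The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place: pinning the pause image to 3.9 and forcing `SystemdCgroup = false` so containerd uses the cgroupfs driver. Two of those substitutions applied to a throwaway config (file contents assumed for the demo, not the real config.toml):

```shell
# Sketch of minikube's in-place containerd config edits on a scratch file.
cfg=/tmp/demo-config.toml
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
# The captured "( *)" group preserves each line's original indentation.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep -E 'sandbox_image|SystemdCgroup' "$cfg"
```

Editing with anchored regexes rather than rewriting the whole file keeps any operator customizations elsewhere in config.toml intact.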
	I0419 19:01:08.543670   14960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 19:01:08.563544   14960 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0419 19:01:08.578557   14960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 19:01:08.613027   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 19:01:08.842996   14960 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0419 19:01:08.882240   14960 start.go:494] detecting cgroup driver to use...
	I0419 19:01:08.898897   14960 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0419 19:01:08.928639   14960 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0419 19:01:08.928803   14960 command_runner.go:130] > [Unit]
	I0419 19:01:08.928848   14960 command_runner.go:130] > Description=Docker Application Container Engine
	I0419 19:01:08.928848   14960 command_runner.go:130] > Documentation=https://docs.docker.com
	I0419 19:01:08.928848   14960 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0419 19:01:08.928848   14960 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0419 19:01:08.928848   14960 command_runner.go:130] > StartLimitBurst=3
	I0419 19:01:08.928848   14960 command_runner.go:130] > StartLimitIntervalSec=60
	I0419 19:01:08.928848   14960 command_runner.go:130] > [Service]
	I0419 19:01:08.928848   14960 command_runner.go:130] > Type=notify
	I0419 19:01:08.928848   14960 command_runner.go:130] > Restart=on-failure
	I0419 19:01:08.928940   14960 command_runner.go:130] > Environment=NO_PROXY=172.19.42.24
	I0419 19:01:08.928940   14960 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0419 19:01:08.929007   14960 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0419 19:01:08.929045   14960 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0419 19:01:08.929045   14960 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0419 19:01:08.929103   14960 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0419 19:01:08.929103   14960 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0419 19:01:08.929130   14960 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0419 19:01:08.929186   14960 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0419 19:01:08.929186   14960 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0419 19:01:08.929186   14960 command_runner.go:130] > ExecStart=
	I0419 19:01:08.929186   14960 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0419 19:01:08.929186   14960 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0419 19:01:08.929186   14960 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0419 19:01:08.929186   14960 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0419 19:01:08.929186   14960 command_runner.go:130] > LimitNOFILE=infinity
	I0419 19:01:08.929186   14960 command_runner.go:130] > LimitNPROC=infinity
	I0419 19:01:08.929186   14960 command_runner.go:130] > LimitCORE=infinity
	I0419 19:01:08.929186   14960 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0419 19:01:08.929186   14960 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0419 19:01:08.929186   14960 command_runner.go:130] > TasksMax=infinity
	I0419 19:01:08.929186   14960 command_runner.go:130] > TimeoutStartSec=0
	I0419 19:01:08.929186   14960 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0419 19:01:08.929186   14960 command_runner.go:130] > Delegate=yes
	I0419 19:01:08.929186   14960 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0419 19:01:08.929186   14960 command_runner.go:130] > KillMode=process
	I0419 19:01:08.929186   14960 command_runner.go:130] > [Install]
	I0419 19:01:08.929186   14960 command_runner.go:130] > WantedBy=multi-user.target
	I0419 19:01:08.944507   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 19:01:08.989765   14960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 19:01:09.036757   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 19:01:09.080760   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 19:01:09.120826   14960 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0419 19:01:09.194341   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 19:01:09.221446   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 19:01:09.258347   14960 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
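The `printf %s "..." | sudo tee /etc/crictl.yaml` pattern above is used because a plain shell redirection (`>`) would be performed with the SSH user's privileges; piping into `sudo tee` lets the write itself run as root. The same write against a scratch path:

```shell
# Sketch of the crictl.yaml write (scratch directory instead of /etc, no sudo needed here).
mkdir -p /tmp/demo-etc
printf '%s' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock
' | tee /tmp/demo-etc/crictl.yaml
```

This file is what makes a bare `crictl` talk to cri-dockerd instead of probing for containerd's socket.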
	I0419 19:01:09.270335   14960 ssh_runner.go:195] Run: which cri-dockerd
	I0419 19:01:09.281338   14960 command_runner.go:130] > /usr/bin/cri-dockerd
	I0419 19:01:09.296395   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0419 19:01:09.317652   14960 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0419 19:01:09.369444   14960 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0419 19:01:09.591646   14960 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0419 19:01:09.791897   14960 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0419 19:01:09.792098   14960 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0419 19:01:09.842651   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 19:01:10.066054   14960 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 19:01:12.701497   14960 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.635438s)
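The log only shows a 130-byte `daemon.json` being copied to configure Docker for the cgroupfs driver; its exact contents are not in the log. A plausible minimal shape for such a file (contents assumed, not taken from the log):

```shell
# Hypothetical daemon.json pinning Docker to the cgroupfs cgroup driver
# (scratch path; the real target is /etc/docker/daemon.json).
mkdir -p /tmp/demo-docker
cat > /tmp/demo-docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
cat /tmp/demo-docker/daemon.json
```

After writing it, the log's `systemctl daemon-reload` + `systemctl restart docker` sequence is what makes the new driver take effect.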
	I0419 19:01:12.716637   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0419 19:01:12.761639   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 19:01:12.801948   14960 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0419 19:01:13.025145   14960 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0419 19:01:13.233611   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 19:01:13.454757   14960 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0419 19:01:13.502274   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 19:01:13.542691   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 19:01:13.791570   14960 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0419 19:01:13.917116   14960 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0419 19:01:13.927454   14960 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0419 19:01:13.946428   14960 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0419 19:01:13.946428   14960 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0419 19:01:13.946428   14960 command_runner.go:130] > Device: 0,22	Inode: 860         Links: 1
	I0419 19:01:13.946428   14960 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0419 19:01:13.946428   14960 command_runner.go:130] > Access: 2024-04-20 02:01:13.806811980 +0000
	I0419 19:01:13.946428   14960 command_runner.go:130] > Modify: 2024-04-20 02:01:13.806811980 +0000
	I0419 19:01:13.946428   14960 command_runner.go:130] > Change: 2024-04-20 02:01:13.810812117 +0000
	I0419 19:01:13.946428   14960 command_runner.go:130] >  Birth: -
	I0419 19:01:13.946428   14960 start.go:562] Will wait 60s for crictl version
	I0419 19:01:13.960453   14960 ssh_runner.go:195] Run: which crictl
	I0419 19:01:13.967237   14960 command_runner.go:130] > /usr/bin/crictl
	I0419 19:01:13.981372   14960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 19:01:14.042136   14960 command_runner.go:130] > Version:  0.1.0
	I0419 19:01:14.042270   14960 command_runner.go:130] > RuntimeName:  docker
	I0419 19:01:14.042270   14960 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0419 19:01:14.042270   14960 command_runner.go:130] > RuntimeApiVersion:  v1
	I0419 19:01:14.042373   14960 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0419 19:01:14.052180   14960 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 19:01:14.091495   14960 command_runner.go:130] > 26.0.1
	I0419 19:01:14.103244   14960 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 19:01:14.137426   14960 command_runner.go:130] > 26.0.1
	I0419 19:01:14.145035   14960 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0419 19:01:14.147657   14960 out.go:177]   - env NO_PROXY=172.19.42.24
	I0419 19:01:14.149658   14960 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0419 19:01:14.154656   14960 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0419 19:01:14.154656   14960 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0419 19:01:14.154656   14960 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0419 19:01:14.154656   14960 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8c:b9:25 Flags:up|broadcast|multicast|running}
	I0419 19:01:14.157661   14960 ip.go:210] interface addr: fe80::ce04:318e:a1d8:4460/64
	I0419 19:01:14.157661   14960 ip.go:210] interface addr: 172.19.32.1/20
	I0419 19:01:14.171677   14960 ssh_runner.go:195] Run: grep 172.19.32.1	host.minikube.internal$ /etc/hosts
	I0419 19:01:14.179110   14960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.32.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
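The `/etc/hosts` update above uses a grep-out-then-append pattern so the `host.minikube.internal` entry is replaced idempotently rather than duplicated on every start. The same pattern on a scratch copy (the stale 172.19.99.9 entry is invented for the demo; bash syntax assumed for `$'\t'`):

```shell
# Sketch of minikube's idempotent /etc/hosts rewrite on a throwaway file.
hosts=/tmp/demo-hosts
printf '127.0.0.1\tlocalhost\n172.19.99.9\thost.minikube.internal\n' > "$hosts"
# Drop any previous mapping, append the fresh one, then swap the file in.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  echo $'172.19.32.1\thost.minikube.internal'; } > "$hosts.$$"
cp "$hosts.$$" "$hosts" && rm -f "$hosts.$$"
grep 'host.minikube.internal' "$hosts"
```

The real invocation wraps this in `sudo cp` for the final swap, since `/etc/hosts` is root-owned.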
	I0419 19:01:14.202666   14960 mustload.go:65] Loading cluster: multinode-348000
	I0419 19:01:14.203401   14960 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 19:01:14.204153   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 19:01:16.329740   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:01:16.330191   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:16.330191   14960 host.go:66] Checking if "multinode-348000" exists ...
	I0419 19:01:16.330863   14960 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000 for IP: 172.19.47.34
	I0419 19:01:16.330863   14960 certs.go:194] generating shared ca certs ...
	I0419 19:01:16.330863   14960 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:01:16.331414   14960 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0419 19:01:16.331666   14960 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0419 19:01:16.331666   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 19:01:16.332342   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0419 19:01:16.332530   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 19:01:16.332769   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 19:01:16.332769   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem (1338 bytes)
	W0419 19:01:16.333349   14960 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416_empty.pem, impossibly tiny 0 bytes
	I0419 19:01:16.333582   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0419 19:01:16.333793   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0419 19:01:16.333793   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0419 19:01:16.333793   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0419 19:01:16.335039   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem (1708 bytes)
	I0419 19:01:16.335270   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 19:01:16.335504   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem -> /usr/share/ca-certificates/3416.pem
	I0419 19:01:16.335693   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /usr/share/ca-certificates/34162.pem
	I0419 19:01:16.335693   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 19:01:16.399108   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 19:01:16.450867   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 19:01:16.506333   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 19:01:16.556601   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 19:01:16.614342   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem --> /usr/share/ca-certificates/3416.pem (1338 bytes)
	I0419 19:01:16.661285   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /usr/share/ca-certificates/34162.pem (1708 bytes)
	I0419 19:01:16.733715   14960 ssh_runner.go:195] Run: openssl version
	I0419 19:01:16.745380   14960 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0419 19:01:16.760333   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 19:01:16.798285   14960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 19:01:16.806669   14960 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 19:01:16.806669   14960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 19:01:16.821616   14960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 19:01:16.830618   14960 command_runner.go:130] > b5213941
	I0419 19:01:16.844377   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 19:01:16.879247   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3416.pem && ln -fs /usr/share/ca-certificates/3416.pem /etc/ssl/certs/3416.pem"
	I0419 19:01:16.914700   14960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3416.pem
	I0419 19:01:16.923267   14960 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 19:01:16.924204   14960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 19:01:16.937060   14960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3416.pem
	I0419 19:01:16.946404   14960 command_runner.go:130] > 51391683
	I0419 19:01:16.960456   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3416.pem /etc/ssl/certs/51391683.0"
	I0419 19:01:16.997669   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34162.pem && ln -fs /usr/share/ca-certificates/34162.pem /etc/ssl/certs/34162.pem"
	I0419 19:01:17.033682   14960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34162.pem
	I0419 19:01:17.041522   14960 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 19:01:17.041522   14960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 19:01:17.055348   14960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34162.pem
	I0419 19:01:17.065520   14960 command_runner.go:130] > 3ec20f2e
	I0419 19:01:17.079279   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34162.pem /etc/ssl/certs/3ec20f2e.0"
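The certificate steps above compute `openssl x509 -hash` for each PEM and symlink it as `<hash>.0` under `/etc/ssl/certs`; that subject-name hash is how OpenSSL's directory-based lookup locates a CA at verification time. A self-contained sketch with a throwaway CA (all paths and the CN hypothetical):

```shell
# Sketch of the hash-and-symlink CA install, using a freshly generated demo CA.
cert=/tmp/demo-ca.pem
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out "$cert" -days 1 -subj "/CN=demoCA" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$cert")   # e.g. b5213941 in the log above
mkdir -p /tmp/demo-certs
ln -fs "$cert" "/tmp/demo-certs/${hash}.0"
ls -l /tmp/demo-certs
```

The `.0` suffix is a collision index: a second, distinct CA with the same subject hash would be installed as `<hash>.1`.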
	I0419 19:01:17.116414   14960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 19:01:17.123098   14960 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 19:01:17.124706   14960 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 19:01:17.124920   14960 kubeadm.go:928] updating node {m02 172.19.47.34 8443 v1.30.0 docker false true} ...
	I0419 19:01:17.125141   14960 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-348000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.47.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 19:01:17.138352   14960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 19:01:17.160399   14960 command_runner.go:130] > kubeadm
	I0419 19:01:17.160399   14960 command_runner.go:130] > kubectl
	I0419 19:01:17.160399   14960 command_runner.go:130] > kubelet
	I0419 19:01:17.160399   14960 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 19:01:17.174019   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0419 19:01:17.194262   14960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0419 19:01:17.229251   14960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 19:01:17.279087   14960 ssh_runner.go:195] Run: grep 172.19.42.24	control-plane.minikube.internal$ /etc/hosts
	I0419 19:01:17.286304   14960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.42.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 19:01:17.324868   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 19:01:17.536268   14960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 19:01:17.572578   14960 host.go:66] Checking if "multinode-348000" exists ...
	I0419 19:01:17.573436   14960 start.go:316] joinCluster: &{Name:multinode-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.42.24 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.47.34 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.37.59 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisione
r:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 19:01:17.573651   14960 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.19.47.34 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0419 19:01:17.573725   14960 host.go:66] Checking if "multinode-348000-m02" exists ...
	I0419 19:01:17.574300   14960 mustload.go:65] Loading cluster: multinode-348000
	I0419 19:01:17.574781   14960 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 19:01:17.575387   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 19:01:19.772194   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:01:19.772194   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:19.772194   14960 host.go:66] Checking if "multinode-348000" exists ...
	I0419 19:01:19.773657   14960 api_server.go:166] Checking apiserver status ...
	I0419 19:01:19.792360   14960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 19:01:19.792360   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 19:01:21.959590   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:01:21.959590   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:21.959590   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 19:01:24.565929   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 19:01:24.565929   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:24.566380   14960 sshutil.go:53] new ssh client: &{IP:172.19.42.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 19:01:24.680711   14960 command_runner.go:130] > 1877
	I0419 19:01:24.680711   14960 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.8883404s)
	I0419 19:01:24.694244   14960 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1877/cgroup
	W0419 19:01:24.714312   14960 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1877/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 19:01:24.728594   14960 ssh_runner.go:195] Run: ls
	I0419 19:01:24.741144   14960 api_server.go:253] Checking apiserver healthz at https://172.19.42.24:8443/healthz ...
	I0419 19:01:24.749114   14960 api_server.go:279] https://172.19.42.24:8443/healthz returned 200:
	ok
	I0419 19:01:24.762494   14960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-348000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0419 19:01:24.921133   14960 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-s98rh, kube-system/kube-proxy-bjv9b
	I0419 19:01:27.962842   14960 command_runner.go:130] > node/multinode-348000-m02 cordoned
	I0419 19:01:27.962842   14960 command_runner.go:130] > pod "busybox-fc5497c4f-2d5hs" has DeletionTimestamp older than 1 seconds, skipping
	I0419 19:01:27.962842   14960 command_runner.go:130] > node/multinode-348000-m02 drained
	I0419 19:01:27.962842   14960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-348000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.200341s)
	I0419 19:01:27.962842   14960 node.go:128] successfully drained node "multinode-348000-m02"
	I0419 19:01:27.962842   14960 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0419 19:01:27.962842   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:01:30.126646   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:01:30.126646   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:30.127588   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:01:32.772503   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:01:32.772634   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:32.772777   14960 sshutil.go:53] new ssh client: &{IP:172.19.47.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\id_rsa Username:docker}
	I0419 19:01:33.271059   14960 command_runner.go:130] ! W0420 02:01:33.258193    1546 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0419 19:01:33.892281   14960 command_runner.go:130] ! W0420 02:01:33.879473    1546 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod a8f6b8169c72cdcce217a8588db0863a6d44839a0a40fadcb1e83f6c0b93ade3: output: E0420 02:01:33.527603    1582 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-2d5hs_default\" network: cni config uninitialized" podSandboxID="a8f6b8169c72cdcce217a8588db0863a6d44839a0a40fadcb1e83f6c0b93ade3"
	I0419 19:01:33.892334   14960 command_runner.go:130] ! time="2024-04-20T02:01:33Z" level=fatal msg="stopping the pod sandbox \"a8f6b8169c72cdcce217a8588db0863a6d44839a0a40fadcb1e83f6c0b93ade3\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-2d5hs_default\" network: cni config uninitialized"
	I0419 19:01:33.892334   14960 command_runner.go:130] ! : exit status 1
	I0419 19:01:33.919921   14960 command_runner.go:130] > [preflight] Running pre-flight checks
	I0419 19:01:33.920035   14960 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0419 19:01:33.920035   14960 command_runner.go:130] > [reset] Stopping the kubelet service
	I0419 19:01:33.920035   14960 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0419 19:01:33.920114   14960 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0419 19:01:33.920114   14960 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0419 19:01:33.920114   14960 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0419 19:01:33.920114   14960 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0419 19:01:33.920114   14960 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0419 19:01:33.920114   14960 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0419 19:01:33.920114   14960 command_runner.go:130] > to reset your system's IPVS tables.
	I0419 19:01:33.920114   14960 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0419 19:01:33.920114   14960 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0419 19:01:33.920114   14960 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.9572599s)
	I0419 19:01:33.920114   14960 node.go:155] successfully reset node "multinode-348000-m02"
	I0419 19:01:33.921684   14960 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 19:01:33.921751   14960 kapi.go:59] client config for multinode-348000: &rest.Config{Host:"https://172.19.42.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c35620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 19:01:33.923072   14960 cert_rotation.go:137] Starting client certificate rotation controller
	I0419 19:01:33.923889   14960 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0419 19:01:33.924013   14960 round_trippers.go:463] DELETE https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:33.924048   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:33.924048   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:33.924079   14960 round_trippers.go:473]     Content-Type: application/json
	I0419 19:01:33.924079   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:33.941110   14960 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0419 19:01:33.941110   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:33.941110   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:33.941191   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:33.941191   14960 round_trippers.go:580]     Content-Length: 171
	I0419 19:01:33.941191   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:33 GMT
	I0419 19:01:33.941290   14960 round_trippers.go:580]     Audit-Id: 1d74b676-1386-4baf-a7a5-6c73d15d4038
	I0419 19:01:33.941290   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:33.941290   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:33.941340   14960 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-348000-m02","kind":"nodes","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608"}}
	I0419 19:01:33.941340   14960 node.go:180] successfully deleted node "multinode-348000-m02"
	I0419 19:01:33.941440   14960 start.go:333] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.19.47.34 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
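The removal sequence logged above is drain → `kubeadm reset` → delete the Node object (minikube issues the delete as a raw API `DELETE /api/v1/nodes/<name>`; `kubectl delete node` is the CLI equivalent). A simplified sketch of that sequence, using the binary path, node name, and flags from this run (the real invocation also pins `PATH` to the versioned binaries directory, omitted here):

```shell
KUBECTL=/var/lib/minikube/binaries/v1.30.0/kubectl
NODE=multinode-348000-m02
# Set RUN=echo before calling for a dry run; leave it empty to execute.
remove_worker_node() {
  # Evict workloads, tolerating DaemonSets and emptyDir data loss.
  ${RUN:-} sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$KUBECTL" drain "$NODE" \
    --force --grace-period=1 --skip-wait-for-delete-timeout=1 \
    --disable-eviction --ignore-daemonsets --delete-emptydir-data
  # Wipe kubeadm state on the worker itself.
  ${RUN:-} sudo kubeadm reset --force --ignore-preflight-errors=all \
    --cri-socket=unix:///var/run/cri-dockerd.sock
  # Remove the Node object so the name can be reused on rejoin.
  ${RUN:-} sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$KUBECTL" delete node "$NODE"
}
```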
	I0419 19:01:33.941508   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0419 19:01:33.941585   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 19:01:36.054151   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:01:36.054151   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:36.054293   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 19:01:38.625022   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 19:01:38.626060   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:38.626313   14960 sshutil.go:53] new ssh client: &{IP:172.19.42.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 19:01:38.824137   14960 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token lulnn1.bllunk0142pxcua8 --discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 
	I0419 19:01:38.824137   14960 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.882619s)
	I0419 19:01:38.824273   14960 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.19.47.34 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0419 19:01:38.824312   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lulnn1.bllunk0142pxcua8 --discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-348000-m02"
	I0419 19:01:39.058259   14960 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0419 19:01:40.452720   14960 command_runner.go:130] > [preflight] Running pre-flight checks
	I0419 19:01:40.452866   14960 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0419 19:01:40.452866   14960 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0419 19:01:40.452866   14960 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 19:01:40.452866   14960 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 19:01:40.452866   14960 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0419 19:01:40.452929   14960 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0419 19:01:40.452929   14960 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002344373s
	I0419 19:01:40.452987   14960 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0419 19:01:40.452987   14960 command_runner.go:130] > This node has joined the cluster:
	I0419 19:01:40.453015   14960 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0419 19:01:40.453015   14960 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0419 19:01:40.453015   14960 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0419 19:01:40.453087   14960 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lulnn1.bllunk0142pxcua8 --discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-348000-m02": (1.6287717s)
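The rejoin works in two steps visible in the log: the control plane mints a base join command (`kubeadm token create --print-join-command --ttl=0`), and minikube appends its own flags before running it on the worker. A sketch of that assembly step (the helper name is illustrative; the token and hash used in the test below are the now-spent values from this run, as sample input only):

```shell
# Append minikube's extra flags to the kubeadm-minted base join command.
build_join_command() {
  local base=$1 node_name=$2
  printf '%s --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=%s\n' \
    "$base" "$node_name"
}
```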
	I0419 19:01:40.453267   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0419 19:01:40.678777   14960 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0419 19:01:40.900769   14960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-348000-m02 minikube.k8s.io/updated_at=2024_04_19T19_01_40_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=multinode-348000 minikube.k8s.io/primary=false
	I0419 19:01:41.055068   14960 command_runner.go:130] > node/multinode-348000-m02 labeled
	I0419 19:01:41.055068   14960 start.go:318] duration metric: took 23.4815828s to joinCluster
	I0419 19:01:41.055068   14960 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.47.34 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0419 19:01:41.063162   14960 out.go:177] * Verifying Kubernetes components...
	I0419 19:01:41.059055   14960 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 19:01:41.080884   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 19:01:41.300370   14960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 19:01:41.331316   14960 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 19:01:41.332113   14960 kapi.go:59] client config for multinode-348000: &rest.Config{Host:"https://172.19.42.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c35620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 19:01:41.333067   14960 node_ready.go:35] waiting up to 6m0s for node "multinode-348000-m02" to be "Ready" ...
	I0419 19:01:41.333216   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:41.333216   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:41.333216   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:41.333216   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:41.337635   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 19:01:41.337706   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:41.337706   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:41.337706   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:41.337706   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:41.337791   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:41.337791   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:41 GMT
	I0419 19:01:41.337791   14960 round_trippers.go:580]     Audit-Id: 66e04f6c-f89c-48a7-aa9b-f0859b332d37
	I0419 19:01:41.338090   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2104","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0419 19:01:41.834693   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:41.834765   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:41.834765   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:41.834765   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:41.838155   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:41.838709   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:41.838709   14960 round_trippers.go:580]     Audit-Id: 0d452dcb-2520-4e2c-a48f-d3784908f2bc
	I0419 19:01:41.838709   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:41.838709   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:41.838709   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:41.838820   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:41.838820   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:41 GMT
	I0419 19:01:41.839012   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2104","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0419 19:01:42.338826   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:42.338887   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:42.338887   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:42.338887   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:42.346387   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 19:01:42.346387   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:42.346387   14960 round_trippers.go:580]     Audit-Id: 23da4afe-332a-4a27-81e6-af4580e224e9
	I0419 19:01:42.346387   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:42.346387   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:42.346387   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:42.346387   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:42.346387   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:42 GMT
	I0419 19:01:42.347351   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2104","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0419 19:01:42.836832   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:42.836832   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:42.836832   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:42.836832   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:42.843828   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 19:01:42.843828   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:42.843828   14960 round_trippers.go:580]     Audit-Id: 4b28280e-c168-4a6d-8a76-5320f2bce41e
	I0419 19:01:42.843828   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:42.843828   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:42.843828   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:42.843828   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:42.843828   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:42 GMT
	I0419 19:01:42.843828   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2104","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0419 19:01:43.346845   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:43.346913   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:43.346913   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:43.346913   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:43.350324   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:43.351221   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:43.351221   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:43.351221   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:43.351303   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:43.351303   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:43 GMT
	I0419 19:01:43.351326   14960 round_trippers.go:580]     Audit-Id: 2d042065-5fe2-4aac-ae0f-1879cb2ee98b
	I0419 19:01:43.351326   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:43.351642   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2104","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0419 19:01:43.351759   14960 node_ready.go:53] node "multinode-348000-m02" has status "Ready":"False"
	I0419 19:01:43.838176   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:43.838239   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:43.838287   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:43.838287   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:43.845809   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 19:01:43.845809   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:43.845809   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:43.845809   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:43.845809   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:43.845809   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:43.845809   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:43 GMT
	I0419 19:01:43.845809   14960 round_trippers.go:580]     Audit-Id: 5b6c0e89-4b9f-4e1e-b63c-45ba9f620b06
	I0419 19:01:43.846457   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:44.341171   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:44.341171   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:44.341171   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:44.341171   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:44.348316   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 19:01:44.348316   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:44.348316   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:44.348316   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:44.348316   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:44.348316   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:44 GMT
	I0419 19:01:44.348316   14960 round_trippers.go:580]     Audit-Id: 3e0924df-697a-4fcf-8e5c-08800e1ddff8
	I0419 19:01:44.348316   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:44.348316   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:44.840070   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:44.840220   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:44.840220   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:44.840303   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:44.844591   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 19:01:44.844591   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:44.844591   14960 round_trippers.go:580]     Audit-Id: e0ff0c7a-45c3-4139-8873-86f79a227ade
	I0419 19:01:44.844591   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:44.845220   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:44.845220   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:44.845220   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:44.845270   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:44 GMT
	I0419 19:01:44.845485   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:45.337160   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:45.337160   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:45.337160   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:45.337160   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:45.345619   14960 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 19:01:45.345619   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:45.345619   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:45.345619   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:45.345619   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:45 GMT
	I0419 19:01:45.345619   14960 round_trippers.go:580]     Audit-Id: c9ee900d-f027-4b4b-b47a-d928341aefc4
	I0419 19:01:45.345619   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:45.346017   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:45.346561   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:45.839469   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:45.839580   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:45.839580   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:45.839580   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:45.843114   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:45.843114   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:45.843114   14960 round_trippers.go:580]     Audit-Id: 94242ce1-9def-442e-a607-ccda8bb10bed
	I0419 19:01:45.843114   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:45.843114   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:45.843114   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:45.843114   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:45.843114   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:45 GMT
	I0419 19:01:45.843481   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:45.844025   14960 node_ready.go:53] node "multinode-348000-m02" has status "Ready":"False"
	I0419 19:01:46.340968   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:46.341026   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:46.341026   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:46.341026   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:46.345623   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 19:01:46.345717   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:46.345717   14960 round_trippers.go:580]     Audit-Id: f2532663-d9cb-4dff-933c-c87ef1778a1f
	I0419 19:01:46.345717   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:46.345717   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:46.345717   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:46.345717   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:46.345717   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:46 GMT
	I0419 19:01:46.345908   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:46.833622   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:46.833622   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:46.833698   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:46.833698   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:46.838617   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 19:01:46.838617   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:46.838617   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:46.838617   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:46.838617   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:46.839042   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:46 GMT
	I0419 19:01:46.839042   14960 round_trippers.go:580]     Audit-Id: 548d8383-10c0-4a60-baa3-7f4a28fb91b3
	I0419 19:01:46.839042   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:46.839106   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:47.334238   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:47.334315   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:47.334315   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:47.334315   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:47.338183   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:47.338183   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:47.338818   14960 round_trippers.go:580]     Audit-Id: 1fbae893-1952-42ae-bcf9-4f77dbb6dc4a
	I0419 19:01:47.338818   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:47.338818   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:47.338818   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:47.338818   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:47.338818   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:47 GMT
	I0419 19:01:47.338990   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:47.834984   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:47.835063   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:47.835063   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:47.835063   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:47.841019   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 19:01:47.841019   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:47.841019   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:47.841019   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:47.841019   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:47 GMT
	I0419 19:01:47.841019   14960 round_trippers.go:580]     Audit-Id: d1d49fdb-64d0-4024-9496-4daf13ceea8f
	I0419 19:01:47.841019   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:47.841019   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:47.841555   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:48.336497   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:48.336735   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:48.336735   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:48.336735   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:48.339794   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:48.340569   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:48.340569   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:48.340569   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:48.340569   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:48 GMT
	I0419 19:01:48.340569   14960 round_trippers.go:580]     Audit-Id: c1059c42-86d1-4643-a47d-cd9285b13341
	I0419 19:01:48.340569   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:48.340569   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:48.340741   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:48.340741   14960 node_ready.go:53] node "multinode-348000-m02" has status "Ready":"False"
	I0419 19:01:48.834374   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:48.834374   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:48.834374   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:48.834374   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:48.838010   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:48.838010   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:48.838010   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:48.838010   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:48.838010   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:48 GMT
	I0419 19:01:48.838010   14960 round_trippers.go:580]     Audit-Id: 1c60fd3b-22be-44c1-9567-490dd33e5fb2
	I0419 19:01:48.838010   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:48.838336   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:48.838624   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:49.345940   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:49.345940   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.345940   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.345940   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.350590   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 19:01:49.350914   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.350914   14960 round_trippers.go:580]     Audit-Id: 2f82e4f6-daeb-4eb6-8ed2-0a0c68ac3d64
	I0419 19:01:49.350914   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.350914   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.350914   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.350914   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.350914   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.351149   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2135","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0419 19:01:49.351691   14960 node_ready.go:49] node "multinode-348000-m02" has status "Ready":"True"
	I0419 19:01:49.351691   14960 node_ready.go:38] duration metric: took 8.0186071s for node "multinode-348000-m02" to be "Ready" ...
	I0419 19:01:49.351691   14960 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 19:01:49.351816   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods
	I0419 19:01:49.351922   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.351922   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.351922   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.357090   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 19:01:49.357090   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.357090   14960 round_trippers.go:580]     Audit-Id: 3ca6cfc8-a637-43d9-80c9-acf9e9398fed
	I0419 19:01:49.357579   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.357579   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.357579   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.357579   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.357579   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.359726   14960 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2137"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1944","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86034 chars]
	I0419 19:01:49.363493   14960 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.363493   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 19:01:49.363493   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.363493   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.363493   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.367195   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:49.367195   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.367195   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.367195   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.367195   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.367195   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.367195   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.367195   14960 round_trippers.go:580]     Audit-Id: f9f5efc5-051b-411b-b390-d5a07dfd1655
	I0419 19:01:49.367680   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1944","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6786 chars]
	I0419 19:01:49.368383   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 19:01:49.368383   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.368383   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.368433   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.371250   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 19:01:49.371250   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.371250   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.371250   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.371250   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.371250   14960 round_trippers.go:580]     Audit-Id: 98f36e51-1dac-4a71-a229-f685771b545b
	I0419 19:01:49.371250   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.371250   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.372422   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 19:01:49.372475   14960 pod_ready.go:92] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"True"
	I0419 19:01:49.372475   14960 pod_ready.go:81] duration metric: took 8.9819ms for pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.372475   14960 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.372475   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-348000
	I0419 19:01:49.372475   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.372475   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.372475   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.376623   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:49.376680   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.376680   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.376680   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.376680   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.376680   14960 round_trippers.go:580]     Audit-Id: a655482b-dcbc-4e08-831f-f9a829493409
	I0419 19:01:49.376680   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.376816   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.376863   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-348000","namespace":"kube-system","uid":"33702588-cdf3-4577-b18d-18415cca2c25","resourceVersion":"1836","creationTimestamp":"2024-04-20T01:58:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.42.24:2379","kubernetes.io/config.hash":"c0cfa3da6a3913c3e67500f6c3e9d72b","kubernetes.io/config.mirror":"c0cfa3da6a3913c3e67500f6c3e9d72b","kubernetes.io/config.seen":"2024-04-20T01:57:55.099346749Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:58:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6149 chars]
	I0419 19:01:49.377407   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 19:01:49.377407   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.377407   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.377407   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.381033   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:49.381033   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.381033   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.381033   14960 round_trippers.go:580]     Audit-Id: adef41ac-4d59-4d1c-9d43-4c2f73229310
	I0419 19:01:49.381033   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.381033   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.381033   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.381033   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.381800   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 19:01:49.381800   14960 pod_ready.go:92] pod "etcd-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 19:01:49.381800   14960 pod_ready.go:81] duration metric: took 9.325ms for pod "etcd-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.381800   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.381800   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-348000
	I0419 19:01:49.382355   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.382355   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.382422   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.385941   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:49.385941   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.385941   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.385941   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.385941   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.385941   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.386740   14960 round_trippers.go:580]     Audit-Id: 2c195bc7-d84e-4c7f-98ef-27af298a02f6
	I0419 19:01:49.386740   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.386974   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-348000","namespace":"kube-system","uid":"13adbf1b-6c17-47a9-951d-2481680a47bd","resourceVersion":"1823","creationTimestamp":"2024-04-20T01:58:01Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.42.24:8443","kubernetes.io/config.hash":"af7a3c9321ace7e2a933260472b90113","kubernetes.io/config.mirror":"af7a3c9321ace7e2a933260472b90113","kubernetes.io/config.seen":"2024-04-20T01:57:55.026086199Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:58:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7685 chars]
	I0419 19:01:49.387536   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 19:01:49.387536   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.387536   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.387608   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.389803   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 19:01:49.389803   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.389803   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.389803   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.389803   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.389803   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.389803   14960 round_trippers.go:580]     Audit-Id: d637f7f2-9a30-474d-bd31-d40f71eb0cef
	I0419 19:01:49.389803   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.389803   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 19:01:49.391759   14960 pod_ready.go:92] pod "kube-apiserver-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 19:01:49.391817   14960 pod_ready.go:81] duration metric: took 10.0173ms for pod "kube-apiserver-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.391817   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.391933   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-348000
	I0419 19:01:49.391933   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.391933   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.391933   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.395098   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:49.395098   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.395098   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.395098   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.395098   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.395098   14960 round_trippers.go:580]     Audit-Id: 7f468c5c-827e-4301-87bb-c2cbe94d6257
	I0419 19:01:49.395098   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.395098   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.395517   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-348000","namespace":"kube-system","uid":"299bb088-9795-4452-87a8-5e96bcacedde","resourceVersion":"1829","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"30aa2729d0c65b9f89e1ae2d151edd9b","kubernetes.io/config.mirror":"30aa2729d0c65b9f89e1ae2d151edd9b","kubernetes.io/config.seen":"2024-04-20T01:35:08.321898260Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0419 19:01:49.396243   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 19:01:49.396243   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.396243   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.396243   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.398549   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 19:01:49.398549   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.398549   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.398549   14960 round_trippers.go:580]     Audit-Id: fc959da9-7795-49a2-b1ec-b182563f5705
	I0419 19:01:49.398549   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.399314   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.399314   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.399314   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.399607   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 19:01:49.399776   14960 pod_ready.go:92] pod "kube-controller-manager-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 19:01:49.399776   14960 pod_ready.go:81] duration metric: took 7.9587ms for pod "kube-controller-manager-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.399776   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2jjsq" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.548882   14960 request.go:629] Waited for 149.1059ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2jjsq
	I0419 19:01:49.548882   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2jjsq
	I0419 19:01:49.548882   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.548882   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.548882   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.553533   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 19:01:49.553533   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.553533   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.553533   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.553533   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.553533   14960 round_trippers.go:580]     Audit-Id: 6d9624a1-a9f9-4ea9-8b3d-162112f9c72a
	I0419 19:01:49.553533   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.553533   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.554222   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2jjsq","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9666ab7-0d1f-4800-b979-6e38fecdc518","resourceVersion":"1708","creationTimestamp":"2024-04-20T01:42:52Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:42:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0419 19:01:49.751680   14960 request.go:629] Waited for 196.735ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m03
	I0419 19:01:49.751902   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m03
	I0419 19:01:49.751999   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.751999   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.752053   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.759773   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:49.759866   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.759866   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.759933   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.759933   14960 round_trippers.go:580]     Audit-Id: 0b602dda-32d4-48c8-a880-e24545726ec5
	I0419 19:01:49.759933   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.759933   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.759933   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.760161   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m03","uid":"08bfca2d-b382-4052-a5b6-0a78bee7caef","resourceVersion":"1871","creationTimestamp":"2024-04-20T01:53:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_53_29_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:53:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4398 chars]
	I0419 19:01:49.760269   14960 pod_ready.go:97] node "multinode-348000-m03" hosting pod "kube-proxy-2jjsq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000-m03" has status "Ready":"Unknown"
	I0419 19:01:49.760817   14960 pod_ready.go:81] duration metric: took 361.0405ms for pod "kube-proxy-2jjsq" in "kube-system" namespace to be "Ready" ...
	E0419 19:01:49.760817   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000-m03" hosting pod "kube-proxy-2jjsq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000-m03" has status "Ready":"Unknown"
	I0419 19:01:49.760817   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bjv9b" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.954183   14960 request.go:629] Waited for 193.1754ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bjv9b
	I0419 19:01:49.954458   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bjv9b
	I0419 19:01:49.954458   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.954458   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.954458   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.958169   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:49.958169   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.958169   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.958169   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.958169   14960 round_trippers.go:580]     Audit-Id: 295b34b8-91d4-4588-9356-40f2469ffd00
	I0419 19:01:49.958169   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.958169   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.958169   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.960223   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bjv9b","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e909d14-543a-4734-8c17-7e2b8188553d","resourceVersion":"2116","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0419 19:01:50.157140   14960 request.go:629] Waited for 195.6834ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:50.157140   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:50.157140   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:50.157140   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:50.157140   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:50.161738   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 19:01:50.161738   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:50.161738   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:50.161738   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:50.161738   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:50.161995   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:50 GMT
	I0419 19:01:50.161995   14960 round_trippers.go:580]     Audit-Id: 1cf63281-c046-49fe-ba39-ac73ff5f9bd6
	I0419 19:01:50.161995   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:50.162265   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2135","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0419 19:01:50.162739   14960 pod_ready.go:92] pod "kube-proxy-bjv9b" in "kube-system" namespace has status "Ready":"True"
	I0419 19:01:50.162739   14960 pod_ready.go:81] duration metric: took 401.9205ms for pod "kube-proxy-bjv9b" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:50.162739   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kj76x" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:50.346335   14960 request.go:629] Waited for 183.1332ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kj76x
	I0419 19:01:50.346412   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kj76x
	I0419 19:01:50.346492   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:50.346492   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:50.346492   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:50.354744   14960 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 19:01:50.355763   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:50.355763   14960 round_trippers.go:580]     Audit-Id: e1f21d5b-ad88-407d-9210-0ed3613da2ca
	I0419 19:01:50.355763   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:50.355763   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:50.355763   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:50.355763   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:50.355763   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:50 GMT
	I0419 19:01:50.355763   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kj76x","generateName":"kube-proxy-","namespace":"kube-system","uid":"274342c4-c21f-4279-b0ea-743d8e2c1463","resourceVersion":"1750","creationTimestamp":"2024-04-20T01:35:22Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0419 19:01:50.549756   14960 request.go:629] Waited for 193.2869ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 19:01:50.550059   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 19:01:50.550059   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:50.550216   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:50.550216   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:50.556750   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 19:01:50.556750   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:50.556750   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:50.556750   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:50.556750   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:50 GMT
	I0419 19:01:50.556750   14960 round_trippers.go:580]     Audit-Id: a37d225f-38b9-49da-b605-7e1f17b98f91
	I0419 19:01:50.556750   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:50.556750   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:50.557477   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 19:01:50.557532   14960 pod_ready.go:92] pod "kube-proxy-kj76x" in "kube-system" namespace has status "Ready":"True"
	I0419 19:01:50.557532   14960 pod_ready.go:81] duration metric: took 394.7928ms for pod "kube-proxy-kj76x" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:50.557532   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:50.754765   14960 request.go:629] Waited for 196.6075ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348000
	I0419 19:01:50.754765   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348000
	I0419 19:01:50.754765   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:50.755000   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:50.755000   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:50.761472   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 19:01:50.761472   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:50.761472   14960 round_trippers.go:580]     Audit-Id: 46d191e6-cfc8-48b4-a234-f1551e962def
	I0419 19:01:50.761472   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:50.761472   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:50.761472   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:50.761472   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:50.761472   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:50 GMT
	I0419 19:01:50.762447   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-348000","namespace":"kube-system","uid":"000cfafe-a513-4738-9de2-3c25244b72be","resourceVersion":"1824","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"92813b2aed63b63058d3fd06709fa24e","kubernetes.io/config.mirror":"92813b2aed63b63058d3fd06709fa24e","kubernetes.io/config.seen":"2024-04-20T01:35:08.321899460Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0419 19:01:50.958186   14960 request.go:629] Waited for 195.1212ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 19:01:50.958432   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 19:01:50.958506   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:50.958506   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:50.958530   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:50.966027   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 19:01:50.966027   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:50.966027   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:50.966027   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:50.966027   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:50.966027   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:50 GMT
	I0419 19:01:50.966027   14960 round_trippers.go:580]     Audit-Id: 8fcc1a33-8891-4447-9ca2-2e5d82fc4890
	I0419 19:01:50.966027   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:50.966027   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 19:01:50.967345   14960 pod_ready.go:92] pod "kube-scheduler-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 19:01:50.967345   14960 pod_ready.go:81] duration metric: took 409.8114ms for pod "kube-scheduler-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:50.967345   14960 pod_ready.go:38] duration metric: took 1.6156507s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 19:01:50.967345   14960 system_svc.go:44] waiting for kubelet service to be running ....
	I0419 19:01:50.985273   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 19:01:51.012332   14960 system_svc.go:56] duration metric: took 44.9607ms WaitForService to wait for kubelet
	I0419 19:01:51.012332   14960 kubeadm.go:576] duration metric: took 9.9572433s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 19:01:51.012332   14960 node_conditions.go:102] verifying NodePressure condition ...
	I0419 19:01:51.146229   14960 request.go:629] Waited for 133.7259ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes
	I0419 19:01:51.146549   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes
	I0419 19:01:51.146549   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:51.146549   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:51.146549   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:51.151158   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 19:01:51.151158   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:51.151158   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:51 GMT
	I0419 19:01:51.151633   14960 round_trippers.go:580]     Audit-Id: 1fe3e0d5-02c4-4ea7-b6c2-3ea2d67236ac
	I0419 19:01:51.151633   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:51.151633   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:51.151633   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:51.151633   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:51.152472   14960 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2140"},"items":[{"metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15604 chars]
	I0419 19:01:51.153327   14960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 19:01:51.153436   14960 node_conditions.go:123] node cpu capacity is 2
	I0419 19:01:51.153436   14960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 19:01:51.153436   14960 node_conditions.go:123] node cpu capacity is 2
	I0419 19:01:51.153436   14960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 19:01:51.153436   14960 node_conditions.go:123] node cpu capacity is 2
	I0419 19:01:51.153436   14960 node_conditions.go:105] duration metric: took 141.1038ms to run NodePressure ...
	I0419 19:01:51.153436   14960 start.go:240] waiting for startup goroutines ...
	I0419 19:01:51.153542   14960 start.go:254] writing updated cluster config ...
	I0419 19:01:51.157851   14960 out.go:177] 
	I0419 19:01:51.160844   14960 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 19:01:51.169814   14960 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 19:01:51.169814   14960 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 19:01:51.175642   14960 out.go:177] * Starting "multinode-348000-m03" worker node in "multinode-348000" cluster
	I0419 19:01:51.178973   14960 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 19:01:51.178973   14960 cache.go:56] Caching tarball of preloaded images
	I0419 19:01:51.179316   14960 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0419 19:01:51.179316   14960 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 19:01:51.179839   14960 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 19:01:51.188191   14960 start.go:360] acquireMachinesLock for multinode-348000-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 19:01:51.188191   14960 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-348000-m03"
	I0419 19:01:51.188191   14960 start.go:96] Skipping create...Using existing machine configuration
	I0419 19:01:51.188191   14960 fix.go:54] fixHost starting: m03
	I0419 19:01:51.188913   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m03 ).state
	I0419 19:01:53.263702   14960 main.go:141] libmachine: [stdout =====>] : Off
	
	I0419 19:01:53.264538   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:53.264538   14960 fix.go:112] recreateIfNeeded on multinode-348000-m03: state=Stopped err=<nil>
	W0419 19:01:53.264538   14960 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 19:01:53.267585   14960 out.go:177] * Restarting existing hyperv VM for "multinode-348000-m03" ...
	I0419 19:01:53.270855   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-348000-m03
	I0419 19:01:56.370046   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 19:01:56.370792   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:56.370792   14960 main.go:141] libmachine: Waiting for host to start...
	I0419 19:01:56.370792   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m03 ).state
	I0419 19:01:58.536828   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:01:58.536828   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:58.548256   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 19:02:01.029129   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 19:02:01.033553   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:02:02.037128   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m03 ).state
	I0419 19:02:04.119846   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:02:04.122288   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:02:04.122288   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 19:02:06.581337   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 19:02:06.581337   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:02:07.590709   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m03 ).state

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-348000" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-348000
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-348000: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-348000" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-348000	172.19.42.231
multinode-348000-m02	172.19.32.249
multinode-348000-m03	172.19.37.59

After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-348000 -n multinode-348000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-348000 -n multinode-348000: (11.8459341s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 logs -n 25: (10.6825943s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-348000 cp testdata\cp-test.txt                                                                                 | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:46 PDT | 19 Apr 24 18:46 PDT |
	|         | multinode-348000-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-348000 ssh -n                                                                                                  | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:46 PDT | 19 Apr 24 18:46 PDT |
	|         | multinode-348000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-348000 cp multinode-348000-m02:/home/docker/cp-test.txt                                                        | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:46 PDT | 19 Apr 24 18:46 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1378212137\001\cp-test_multinode-348000-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-348000 ssh -n                                                                                                  | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:46 PDT | 19 Apr 24 18:46 PDT |
	|         | multinode-348000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-348000 cp multinode-348000-m02:/home/docker/cp-test.txt                                                        | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:47 PDT | 19 Apr 24 18:47 PDT |
	|         | multinode-348000:/home/docker/cp-test_multinode-348000-m02_multinode-348000.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-348000 ssh -n                                                                                                  | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:47 PDT | 19 Apr 24 18:47 PDT |
	|         | multinode-348000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-348000 ssh -n multinode-348000 sudo cat                                                                        | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:47 PDT | 19 Apr 24 18:47 PDT |
	|         | /home/docker/cp-test_multinode-348000-m02_multinode-348000.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-348000 cp multinode-348000-m02:/home/docker/cp-test.txt                                                        | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:47 PDT | 19 Apr 24 18:47 PDT |
	|         | multinode-348000-m03:/home/docker/cp-test_multinode-348000-m02_multinode-348000-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-348000 ssh -n                                                                                                  | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:47 PDT | 19 Apr 24 18:47 PDT |
	|         | multinode-348000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-348000 ssh -n multinode-348000-m03 sudo cat                                                                    | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:48 PDT | 19 Apr 24 18:48 PDT |
	|         | /home/docker/cp-test_multinode-348000-m02_multinode-348000-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-348000 cp testdata\cp-test.txt                                                                                 | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:48 PDT | 19 Apr 24 18:48 PDT |
	|         | multinode-348000-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-348000 ssh -n                                                                                                  | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:48 PDT | 19 Apr 24 18:48 PDT |
	|         | multinode-348000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-348000 cp multinode-348000-m03:/home/docker/cp-test.txt                                                        | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:48 PDT | 19 Apr 24 18:48 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1378212137\001\cp-test_multinode-348000-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-348000 ssh -n                                                                                                  | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:48 PDT | 19 Apr 24 18:48 PDT |
	|         | multinode-348000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-348000 cp multinode-348000-m03:/home/docker/cp-test.txt                                                        | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:48 PDT | 19 Apr 24 18:49 PDT |
	|         | multinode-348000:/home/docker/cp-test_multinode-348000-m03_multinode-348000.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-348000 ssh -n                                                                                                  | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:49 PDT | 19 Apr 24 18:49 PDT |
	|         | multinode-348000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-348000 ssh -n multinode-348000 sudo cat                                                                        | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:49 PDT | 19 Apr 24 18:49 PDT |
	|         | /home/docker/cp-test_multinode-348000-m03_multinode-348000.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-348000 cp multinode-348000-m03:/home/docker/cp-test.txt                                                        | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:49 PDT | 19 Apr 24 18:49 PDT |
	|         | multinode-348000-m02:/home/docker/cp-test_multinode-348000-m03_multinode-348000-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-348000 ssh -n                                                                                                  | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:49 PDT | 19 Apr 24 18:49 PDT |
	|         | multinode-348000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-348000 ssh -n multinode-348000-m02 sudo cat                                                                    | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:49 PDT | 19 Apr 24 18:49 PDT |
	|         | /home/docker/cp-test_multinode-348000-m03_multinode-348000-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-348000 node stop m03                                                                                           | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:49 PDT | 19 Apr 24 18:50 PDT |
	| node    | multinode-348000 node start                                                                                              | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:51 PDT | 19 Apr 24 18:53 PDT |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-348000                                                                                                 | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:54 PDT |                     |
	| stop    | -p multinode-348000                                                                                                      | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:54 PDT | 19 Apr 24 18:55 PDT |
	| start   | -p multinode-348000                                                                                                      | multinode-348000 | minikube1\jenkins | v1.33.0 | 19 Apr 24 18:55 PDT |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 18:55:51
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 18:55:51.913151   14960 out.go:291] Setting OutFile to fd 948 ...
	I0419 18:55:51.913922   14960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 18:55:51.913922   14960 out.go:304] Setting ErrFile to fd 868...
	I0419 18:55:51.913922   14960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 18:55:51.980851   14960 out.go:298] Setting JSON to false
	I0419 18:55:51.989167   14960 start.go:129] hostinfo: {"hostname":"minikube1","uptime":16610,"bootTime":1713561541,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0419 18:55:51.989167   14960 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 18:55:52.117827   14960 out.go:177] * [multinode-348000] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0419 18:55:52.194388   14960 notify.go:220] Checking for updates...
	I0419 18:55:52.292331   14960 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 18:55:52.465492   14960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 18:55:52.559397   14960 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0419 18:55:52.632405   14960 out.go:177]   - MINIKUBE_LOCATION=18703
	I0419 18:55:52.885380   14960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 18:55:52.993344   14960 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:55:52.993641   14960 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 18:55:58.284530   14960 out.go:177] * Using the hyperv driver based on existing profile
	I0419 18:55:58.288651   14960 start.go:297] selected driver: hyperv
	I0419 18:55:58.288651   14960 start.go:901] validating driver "hyperv" against &{Name:multinode-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.42.231 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.32.249 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.37.59 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 18:55:58.289069   14960 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 18:55:58.342162   14960 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 18:55:58.342162   14960 cni.go:84] Creating CNI manager for ""
	I0419 18:55:58.342162   14960 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0419 18:55:58.343716   14960 start.go:340] cluster config:
	{Name:multinode-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.42.231 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.32.249 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.37.59 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 18:55:58.343716   14960 iso.go:125] acquiring lock: {Name:mk297f2abb67cbbcd36490c866afe693892d0c05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 18:55:58.349258   14960 out.go:177] * Starting "multinode-348000" primary control-plane node in "multinode-348000" cluster
	I0419 18:55:58.385284   14960 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 18:55:58.385835   14960 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0419 18:55:58.385835   14960 cache.go:56] Caching tarball of preloaded images
	I0419 18:55:58.386359   14960 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0419 18:55:58.386751   14960 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 18:55:58.386751   14960 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 18:55:58.389868   14960 start.go:360] acquireMachinesLock for multinode-348000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 18:55:58.389868   14960 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-348000"
	I0419 18:55:58.389868   14960 start.go:96] Skipping create...Using existing machine configuration
	I0419 18:55:58.390399   14960 fix.go:54] fixHost starting: 
	I0419 18:55:58.390558   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:01.011301   14960 main.go:141] libmachine: [stdout =====>] : Off
	
	I0419 18:56:01.011301   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:01.011424   14960 fix.go:112] recreateIfNeeded on multinode-348000: state=Stopped err=<nil>
	W0419 18:56:01.011424   14960 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 18:56:01.017995   14960 out.go:177] * Restarting existing hyperv VM for "multinode-348000" ...
	I0419 18:56:01.021435   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-348000
	I0419 18:56:03.976518   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:56:03.976695   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:03.976749   14960 main.go:141] libmachine: Waiting for host to start...
	I0419 18:56:03.976808   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:06.149898   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:06.149898   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:06.150144   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:08.600938   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:56:08.600938   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:09.609308   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:11.749167   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:11.749167   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:11.749658   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:14.261893   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:56:14.261893   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:15.265289   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:17.405348   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:17.405348   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:17.405486   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:19.898109   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:56:19.898803   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:20.904928   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:23.053093   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:23.053286   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:23.053410   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:25.550050   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:56:25.550237   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:26.564114   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:28.712224   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:28.712224   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:28.712347   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:31.265712   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:56:31.265712   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:31.269700   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:33.327571   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:33.328392   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:33.328451   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:35.852529   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:56:35.852529   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:35.852807   14960 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 18:56:35.855995   14960 machine.go:94] provisionDockerMachine start ...
	I0419 18:56:35.856119   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:37.883473   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:37.884484   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:37.884716   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:40.391568   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:56:40.392030   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:40.399762   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 18:56:40.400677   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.24 22 <nil> <nil>}
	I0419 18:56:40.400677   14960 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 18:56:40.534878   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0419 18:56:40.534878   14960 buildroot.go:166] provisioning hostname "multinode-348000"
	I0419 18:56:40.535043   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:42.572882   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:42.572882   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:42.573256   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:45.056072   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:56:45.056120   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:45.063532   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 18:56:45.063532   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.24 22 <nil> <nil>}
	I0419 18:56:45.063532   14960 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-348000 && echo "multinode-348000" | sudo tee /etc/hostname
	I0419 18:56:45.237666   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-348000
	
	I0419 18:56:45.238364   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:47.296593   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:47.296593   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:47.297059   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:49.751556   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:56:49.751965   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:49.757436   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 18:56:49.758179   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.24 22 <nil> <nil>}
	I0419 18:56:49.758179   14960 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-348000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-348000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-348000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 18:56:49.915457   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 18:56:49.915566   14960 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0419 18:56:49.915566   14960 buildroot.go:174] setting up certificates
	I0419 18:56:49.915687   14960 provision.go:84] configureAuth start
	I0419 18:56:49.915687   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:51.995337   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:51.995337   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:51.996328   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:54.489945   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:56:54.489945   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:54.491020   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:56:56.568951   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:56:56.568951   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:56.569150   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:56:59.080141   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:56:59.080869   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:56:59.080928   14960 provision.go:143] copyHostCerts
	I0419 18:56:59.080928   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0419 18:56:59.080928   14960 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0419 18:56:59.080928   14960 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0419 18:56:59.081531   14960 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0419 18:56:59.083448   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0419 18:56:59.083600   14960 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0419 18:56:59.083600   14960 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0419 18:56:59.083600   14960 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0419 18:56:59.085211   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0419 18:56:59.085459   14960 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0419 18:56:59.085459   14960 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0419 18:56:59.085459   14960 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0419 18:56:59.086717   14960 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-348000 san=[127.0.0.1 172.19.42.24 localhost minikube multinode-348000]
	I0419 18:56:59.212497   14960 provision.go:177] copyRemoteCerts
	I0419 18:56:59.227899   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 18:56:59.227899   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:01.260402   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:01.260588   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:01.260718   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:03.765930   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:03.765984   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:03.766368   14960 sshutil.go:53] new ssh client: &{IP:172.19.42.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 18:57:03.874864   14960 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6469554s)
	I0419 18:57:03.874945   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0419 18:57:03.875102   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0419 18:57:03.923262   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0419 18:57:03.923890   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0419 18:57:03.970966   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0419 18:57:03.970966   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 18:57:04.021035   14960 provision.go:87] duration metric: took 14.1053189s to configureAuth
	I0419 18:57:04.021174   14960 buildroot.go:189] setting minikube options for container-runtime
	I0419 18:57:04.021977   14960 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:57:04.022083   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:06.072836   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:06.072836   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:06.073215   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:08.599860   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:08.599860   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:08.608221   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 18:57:08.608356   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.24 22 <nil> <nil>}
	I0419 18:57:08.608974   14960 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0419 18:57:08.734094   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0419 18:57:08.734649   14960 buildroot.go:70] root file system type: tmpfs
	I0419 18:57:08.734839   14960 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0419 18:57:08.734921   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:10.819245   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:10.819245   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:10.819599   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:13.335038   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:13.335504   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:13.342105   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 18:57:13.342922   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.24 22 <nil> <nil>}
	I0419 18:57:13.342922   14960 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0419 18:57:13.516079   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0419 18:57:13.516079   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:15.577194   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:15.577312   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:15.577312   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:18.061518   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:18.061518   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:18.067921   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 18:57:18.069954   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.24 22 <nil> <nil>}
	I0419 18:57:18.069954   14960 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0419 18:57:20.654789   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0419 18:57:20.654870   14960 machine.go:97] duration metric: took 44.7987431s to provisionDockerMachine
	I0419 18:57:20.654950   14960 start.go:293] postStartSetup for "multinode-348000" (driver="hyperv")
	I0419 18:57:20.654986   14960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 18:57:20.669220   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 18:57:20.669220   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:22.756526   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:22.756526   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:22.756873   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:25.261333   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:25.261333   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:25.262619   14960 sshutil.go:53] new ssh client: &{IP:172.19.42.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 18:57:25.367494   14960 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6981981s)
	I0419 18:57:25.381544   14960 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 18:57:25.387394   14960 command_runner.go:130] > NAME=Buildroot
	I0419 18:57:25.387394   14960 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0419 18:57:25.387394   14960 command_runner.go:130] > ID=buildroot
	I0419 18:57:25.387394   14960 command_runner.go:130] > VERSION_ID=2023.02.9
	I0419 18:57:25.387394   14960 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0419 18:57:25.387394   14960 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 18:57:25.387394   14960 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0419 18:57:25.388650   14960 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0419 18:57:25.389048   14960 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> 34162.pem in /etc/ssl/certs
	I0419 18:57:25.389048   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /etc/ssl/certs/34162.pem
	I0419 18:57:25.406031   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 18:57:25.425469   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /etc/ssl/certs/34162.pem (1708 bytes)
	I0419 18:57:25.474243   14960 start.go:296] duration metric: took 4.819247s for postStartSetup
	I0419 18:57:25.474572   14960 fix.go:56] duration metric: took 1m27.0839897s for fixHost
	I0419 18:57:25.474772   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:27.537697   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:27.537697   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:27.537697   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:30.056748   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:30.056748   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:30.066919   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 18:57:30.067612   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.24 22 <nil> <nil>}
	I0419 18:57:30.067612   14960 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 18:57:30.198144   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713578250.184697143
	
	I0419 18:57:30.198144   14960 fix.go:216] guest clock: 1713578250.184697143
	I0419 18:57:30.198144   14960 fix.go:229] Guest: 2024-04-19 18:57:30.184697143 -0700 PDT Remote: 2024-04-19 18:57:25.4746874 -0700 PDT m=+93.668371801 (delta=4.710009743s)
	I0419 18:57:30.198144   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:32.243202   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:32.243202   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:32.243428   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:34.758113   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:34.758113   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:34.766893   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 18:57:34.767071   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.42.24 22 <nil> <nil>}
	I0419 18:57:34.767071   14960 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713578250
	I0419 18:57:34.908225   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: Sat Apr 20 01:57:30 UTC 2024
	
	I0419 18:57:34.908225   14960 fix.go:236] clock set: Sat Apr 20 01:57:30 UTC 2024
	 (err=<nil>)
	I0419 18:57:34.908225   14960 start.go:83] releasing machines lock for "multinode-348000", held for 1m36.5181541s
	I0419 18:57:34.908225   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:36.964392   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:36.964490   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:36.964591   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:39.472701   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:39.473145   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:39.480354   14960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 18:57:39.480354   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:39.491117   14960 ssh_runner.go:195] Run: cat /version.json
	I0419 18:57:39.491117   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:57:41.650254   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:41.650536   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:41.650682   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:41.684939   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:57:41.684939   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:41.684939   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:57:44.317789   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:44.318368   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:44.318626   14960 sshutil.go:53] new ssh client: &{IP:172.19.42.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 18:57:44.343621   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 18:57:44.343621   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:57:44.343621   14960 sshutil.go:53] new ssh client: &{IP:172.19.42.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 18:57:44.425103   14960 command_runner.go:130] > {"iso_version": "v1.33.0", "kicbase_version": "v0.0.43-1713236840-18649", "minikube_version": "v1.33.0", "commit": "4bd203f0c710e7fdd30539846cf2bc6624a2556d"}
	I0419 18:57:44.425332   14960 ssh_runner.go:235] Completed: cat /version.json: (4.9342037s)
	I0419 18:57:44.439607   14960 ssh_runner.go:195] Run: systemctl --version
	I0419 18:57:44.504695   14960 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0419 18:57:44.504695   14960 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0243304s)
	I0419 18:57:44.504942   14960 command_runner.go:130] > systemd 252 (252)
	I0419 18:57:44.505043   14960 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0419 18:57:44.517125   14960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0419 18:57:44.529313   14960 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0419 18:57:44.530005   14960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 18:57:44.546276   14960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 18:57:44.578981   14960 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0419 18:57:44.579096   14960 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 18:57:44.579096   14960 start.go:494] detecting cgroup driver to use...
	I0419 18:57:44.579205   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 18:57:44.618210   14960 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0419 18:57:44.633185   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0419 18:57:44.670614   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0419 18:57:44.692361   14960 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0419 18:57:44.707651   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0419 18:57:44.740305   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 18:57:44.777779   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0419 18:57:44.812540   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 18:57:44.847553   14960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 18:57:44.884185   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0419 18:57:44.920990   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0419 18:57:44.956049   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0419 18:57:44.994494   14960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 18:57:45.013362   14960 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0419 18:57:45.028271   14960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 18:57:45.060779   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:57:45.273878   14960 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0419 18:57:45.313707   14960 start.go:494] detecting cgroup driver to use...
	I0419 18:57:45.328861   14960 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0419 18:57:45.356028   14960 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0419 18:57:45.356028   14960 command_runner.go:130] > [Unit]
	I0419 18:57:45.356028   14960 command_runner.go:130] > Description=Docker Application Container Engine
	I0419 18:57:45.356028   14960 command_runner.go:130] > Documentation=https://docs.docker.com
	I0419 18:57:45.356028   14960 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0419 18:57:45.356028   14960 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0419 18:57:45.356028   14960 command_runner.go:130] > StartLimitBurst=3
	I0419 18:57:45.356028   14960 command_runner.go:130] > StartLimitIntervalSec=60
	I0419 18:57:45.356028   14960 command_runner.go:130] > [Service]
	I0419 18:57:45.356028   14960 command_runner.go:130] > Type=notify
	I0419 18:57:45.356028   14960 command_runner.go:130] > Restart=on-failure
	I0419 18:57:45.356028   14960 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0419 18:57:45.356028   14960 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0419 18:57:45.356028   14960 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0419 18:57:45.356028   14960 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0419 18:57:45.356028   14960 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0419 18:57:45.356028   14960 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0419 18:57:45.356028   14960 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0419 18:57:45.356028   14960 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0419 18:57:45.356028   14960 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0419 18:57:45.356028   14960 command_runner.go:130] > ExecStart=
	I0419 18:57:45.356028   14960 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0419 18:57:45.356028   14960 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0419 18:57:45.356028   14960 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0419 18:57:45.356568   14960 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0419 18:57:45.356568   14960 command_runner.go:130] > LimitNOFILE=infinity
	I0419 18:57:45.356568   14960 command_runner.go:130] > LimitNPROC=infinity
	I0419 18:57:45.356617   14960 command_runner.go:130] > LimitCORE=infinity
	I0419 18:57:45.356617   14960 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0419 18:57:45.356617   14960 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0419 18:57:45.356617   14960 command_runner.go:130] > TasksMax=infinity
	I0419 18:57:45.356676   14960 command_runner.go:130] > TimeoutStartSec=0
	I0419 18:57:45.356676   14960 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0419 18:57:45.356718   14960 command_runner.go:130] > Delegate=yes
	I0419 18:57:45.356718   14960 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0419 18:57:45.356718   14960 command_runner.go:130] > KillMode=process
	I0419 18:57:45.356718   14960 command_runner.go:130] > [Install]
	I0419 18:57:45.356770   14960 command_runner.go:130] > WantedBy=multi-user.target
	I0419 18:57:45.370652   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 18:57:45.407895   14960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 18:57:45.461873   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 18:57:45.501637   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 18:57:45.544235   14960 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0419 18:57:45.617094   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 18:57:45.647270   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 18:57:45.681764   14960 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0419 18:57:45.696683   14960 ssh_runner.go:195] Run: which cri-dockerd
	I0419 18:57:45.702638   14960 command_runner.go:130] > /usr/bin/cri-dockerd
	I0419 18:57:45.717383   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0419 18:57:45.736623   14960 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0419 18:57:45.783753   14960 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0419 18:57:45.987748   14960 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0419 18:57:46.186538   14960 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0419 18:57:46.186538   14960 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0419 18:57:46.235226   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:57:46.452721   14960 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 18:57:49.103384   14960 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6506574s)
	I0419 18:57:49.117767   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0419 18:57:49.156025   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 18:57:49.193133   14960 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0419 18:57:49.391207   14960 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0419 18:57:49.601806   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:57:49.835578   14960 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0419 18:57:49.887214   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 18:57:49.925625   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:57:50.145208   14960 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0419 18:57:50.254781   14960 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0419 18:57:50.267794   14960 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0419 18:57:50.277781   14960 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0419 18:57:50.277781   14960 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0419 18:57:50.277781   14960 command_runner.go:130] > Device: 0,22	Inode: 852         Links: 1
	I0419 18:57:50.277781   14960 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0419 18:57:50.277781   14960 command_runner.go:130] > Access: 2024-04-20 01:57:50.164058530 +0000
	I0419 18:57:50.277781   14960 command_runner.go:130] > Modify: 2024-04-20 01:57:50.164058530 +0000
	I0419 18:57:50.277781   14960 command_runner.go:130] > Change: 2024-04-20 01:57:50.168058647 +0000
	I0419 18:57:50.277781   14960 command_runner.go:130] >  Birth: -
	I0419 18:57:50.277781   14960 start.go:562] Will wait 60s for crictl version
	I0419 18:57:50.293143   14960 ssh_runner.go:195] Run: which crictl
	I0419 18:57:50.299154   14960 command_runner.go:130] > /usr/bin/crictl
	I0419 18:57:50.317417   14960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 18:57:50.381375   14960 command_runner.go:130] > Version:  0.1.0
	I0419 18:57:50.381375   14960 command_runner.go:130] > RuntimeName:  docker
	I0419 18:57:50.381375   14960 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0419 18:57:50.381375   14960 command_runner.go:130] > RuntimeApiVersion:  v1
	I0419 18:57:50.381375   14960 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0419 18:57:50.391146   14960 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 18:57:50.422989   14960 command_runner.go:130] > 26.0.1
	I0419 18:57:50.433014   14960 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 18:57:50.463601   14960 command_runner.go:130] > 26.0.1
	I0419 18:57:50.468601   14960 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0419 18:57:50.468601   14960 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0419 18:57:50.470600   14960 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0419 18:57:50.470600   14960 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0419 18:57:50.470600   14960 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0419 18:57:50.470600   14960 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8c:b9:25 Flags:up|broadcast|multicast|running}
	I0419 18:57:50.478120   14960 ip.go:210] interface addr: fe80::ce04:318e:a1d8:4460/64
	I0419 18:57:50.478120   14960 ip.go:210] interface addr: 172.19.32.1/20
	I0419 18:57:50.492559   14960 ssh_runner.go:195] Run: grep 172.19.32.1	host.minikube.internal$ /etc/hosts
	I0419 18:57:50.499203   14960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.32.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 18:57:50.521465   14960 kubeadm.go:877] updating cluster {Name:multinode-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.42.24 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.32.249 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.37.59 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 18:57:50.521857   14960 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 18:57:50.531639   14960 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0419 18:57:50.555575   14960 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0419 18:57:50.555575   14960 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0419 18:57:50.555575   14960 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 18:57:50.555575   14960 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0419 18:57:50.555575   14960 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0419 18:57:50.555575   14960 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0419 18:57:50.555575   14960 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0419 18:57:50.555575   14960 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0419 18:57:50.555575   14960 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 18:57:50.555575   14960 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0419 18:57:50.556622   14960 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0419 18:57:50.556622   14960 docker.go:615] Images already preloaded, skipping extraction
	I0419 18:57:50.565566   14960 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0419 18:57:50.588348   14960 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0419 18:57:50.588348   14960 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 18:57:50.588348   14960 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0419 18:57:50.588348   14960 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0419 18:57:50.588348   14960 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0419 18:57:50.588348   14960 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0419 18:57:50.588348   14960 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0419 18:57:50.588348   14960 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0419 18:57:50.588348   14960 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 18:57:50.588348   14960 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0419 18:57:50.589571   14960 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0419 18:57:50.589571   14960 cache_images.go:84] Images are preloaded, skipping loading
	I0419 18:57:50.589571   14960 kubeadm.go:928] updating node { 172.19.42.24 8443 v1.30.0 docker true true} ...
	I0419 18:57:50.589571   14960 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-348000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.42.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 18:57:50.598565   14960 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0419 18:57:50.635570   14960 command_runner.go:130] > cgroupfs
	I0419 18:57:50.635839   14960 cni.go:84] Creating CNI manager for ""
	I0419 18:57:50.635891   14960 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0419 18:57:50.635891   14960 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 18:57:50.635976   14960 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.42.24 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-348000 NodeName:multinode-348000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.42.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.42.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 18:57:50.636139   14960 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.42.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-348000"
	  kubeletExtraArgs:
	    node-ip: 172.19.42.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.42.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0419 18:57:50.648288   14960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 18:57:50.668178   14960 command_runner.go:130] > kubeadm
	I0419 18:57:50.668178   14960 command_runner.go:130] > kubectl
	I0419 18:57:50.668178   14960 command_runner.go:130] > kubelet
	I0419 18:57:50.668178   14960 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 18:57:50.680597   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0419 18:57:50.704763   14960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0419 18:57:50.734984   14960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 18:57:50.763652   14960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0419 18:57:50.818971   14960 ssh_runner.go:195] Run: grep 172.19.42.24	control-plane.minikube.internal$ /etc/hosts
	I0419 18:57:50.826259   14960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.42.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 18:57:50.863179   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:57:51.072135   14960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 18:57:51.104396   14960 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000 for IP: 172.19.42.24
	I0419 18:57:51.104396   14960 certs.go:194] generating shared ca certs ...
	I0419 18:57:51.104396   14960 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:57:51.105376   14960 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0419 18:57:51.105730   14960 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0419 18:57:51.105855   14960 certs.go:256] generating profile certs ...
	I0419 18:57:51.106832   14960 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\client.key
	I0419 18:57:51.107062   14960 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key.ea55f2d0
	I0419 18:57:51.107237   14960 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt.ea55f2d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.42.24]
	I0419 18:57:51.254334   14960 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt.ea55f2d0 ...
	I0419 18:57:51.254334   14960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt.ea55f2d0: {Name:mk1834bcf316826ce45dc2ecf9fee6874a5df74d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:57:51.255870   14960 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key.ea55f2d0 ...
	I0419 18:57:51.255870   14960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key.ea55f2d0: {Name:mkf1eabdf644d4b38289b725707f4624e6455a39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:57:51.256924   14960 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt.ea55f2d0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt
	I0419 18:57:51.269731   14960 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key.ea55f2d0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key
	I0419 18:57:51.271801   14960 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.key
	I0419 18:57:51.271801   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 18:57:51.271801   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0419 18:57:51.271801   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 18:57:51.272469   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 18:57:51.272469   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 18:57:51.272469   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 18:57:51.273093   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 18:57:51.273093   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 18:57:51.274149   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem (1338 bytes)
	W0419 18:57:51.274667   14960 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416_empty.pem, impossibly tiny 0 bytes
	I0419 18:57:51.274777   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0419 18:57:51.275143   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0419 18:57:51.275411   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0419 18:57:51.275729   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0419 18:57:51.276217   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem (1708 bytes)
	I0419 18:57:51.276518   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem -> /usr/share/ca-certificates/3416.pem
	I0419 18:57:51.276613   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /usr/share/ca-certificates/34162.pem
	I0419 18:57:51.276796   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:57:51.278155   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 18:57:51.336844   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 18:57:51.394440   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 18:57:51.447866   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 18:57:51.503720   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0419 18:57:51.554962   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0419 18:57:51.612448   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 18:57:51.662850   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0419 18:57:51.712338   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem --> /usr/share/ca-certificates/3416.pem (1338 bytes)
	I0419 18:57:51.758478   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /usr/share/ca-certificates/34162.pem (1708 bytes)
	I0419 18:57:51.803754   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 18:57:51.849453   14960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0419 18:57:51.897500   14960 ssh_runner.go:195] Run: openssl version
	I0419 18:57:51.905991   14960 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0419 18:57:51.923131   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34162.pem && ln -fs /usr/share/ca-certificates/34162.pem /etc/ssl/certs/34162.pem"
	I0419 18:57:51.963074   14960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34162.pem
	I0419 18:57:51.970371   14960 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 18:57:51.970510   14960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 18:57:51.983605   14960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34162.pem
	I0419 18:57:51.992670   14960 command_runner.go:130] > 3ec20f2e
	I0419 18:57:52.007291   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34162.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 18:57:52.049132   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 18:57:52.088494   14960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:57:52.098338   14960 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:57:52.098338   14960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:57:52.112147   14960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 18:57:52.125104   14960 command_runner.go:130] > b5213941
	I0419 18:57:52.136377   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 18:57:52.174791   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3416.pem && ln -fs /usr/share/ca-certificates/3416.pem /etc/ssl/certs/3416.pem"
	I0419 18:57:52.207601   14960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3416.pem
	I0419 18:57:52.216690   14960 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 18:57:52.217293   14960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 18:57:52.231705   14960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3416.pem
	I0419 18:57:52.241774   14960 command_runner.go:130] > 51391683
	I0419 18:57:52.257361   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3416.pem /etc/ssl/certs/51391683.0"
	I0419 18:57:52.292553   14960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 18:57:52.301612   14960 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 18:57:52.301684   14960 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0419 18:57:52.301684   14960 command_runner.go:130] > Device: 8,1	Inode: 6290258     Links: 1
	I0419 18:57:52.301720   14960 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0419 18:57:52.301720   14960 command_runner.go:130] > Access: 2024-04-20 01:34:55.187593889 +0000
	I0419 18:57:52.301720   14960 command_runner.go:130] > Modify: 2024-04-20 01:34:55.187593889 +0000
	I0419 18:57:52.301720   14960 command_runner.go:130] > Change: 2024-04-20 01:34:55.187593889 +0000
	I0419 18:57:52.301720   14960 command_runner.go:130] >  Birth: 2024-04-20 01:34:55.187593889 +0000
	I0419 18:57:52.320813   14960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0419 18:57:52.330176   14960 command_runner.go:130] > Certificate will not expire
	I0419 18:57:52.343950   14960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0419 18:57:52.354776   14960 command_runner.go:130] > Certificate will not expire
	I0419 18:57:52.366524   14960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0419 18:57:52.377492   14960 command_runner.go:130] > Certificate will not expire
	I0419 18:57:52.389434   14960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0419 18:57:52.399840   14960 command_runner.go:130] > Certificate will not expire
	I0419 18:57:52.413630   14960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0419 18:57:52.423042   14960 command_runner.go:130] > Certificate will not expire
	I0419 18:57:52.436501   14960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0419 18:57:52.449531   14960 command_runner.go:130] > Certificate will not expire
	I0419 18:57:52.450073   14960 kubeadm.go:391] StartCluster: {Name:multinode-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.42.24 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.32.249 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.37.59 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisi
oner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 18:57:52.459380   14960 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0419 18:57:52.497238   14960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0419 18:57:52.518900   14960 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0419 18:57:52.519063   14960 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0419 18:57:52.519063   14960 command_runner.go:130] > /var/lib/minikube/etcd:
	I0419 18:57:52.519063   14960 command_runner.go:130] > member
	W0419 18:57:52.519063   14960 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0419 18:57:52.519063   14960 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0419 18:57:52.519188   14960 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0419 18:57:52.532583   14960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0419 18:57:52.549427   14960 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0419 18:57:52.551533   14960 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-348000" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 18:57:52.552010   14960 kubeconfig.go:62] C:\Users\jenkins.minikube1\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-348000" cluster setting kubeconfig missing "multinode-348000" context setting]
	I0419 18:57:52.552747   14960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:57:52.573432   14960 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 18:57:52.574110   14960 kapi.go:59] client config for multinode-348000: &rest.Config{Host:"https://172.19.42.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:
[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c35620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 18:57:52.575842   14960 cert_rotation.go:137] Starting client certificate rotation controller
	I0419 18:57:52.588431   14960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0419 18:57:52.608371   14960 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0419 18:57:52.608835   14960 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0419 18:57:52.608869   14960 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0419 18:57:52.608869   14960 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0419 18:57:52.608869   14960 command_runner.go:130] >  kind: InitConfiguration
	I0419 18:57:52.608869   14960 command_runner.go:130] >  localAPIEndpoint:
	I0419 18:57:52.608869   14960 command_runner.go:130] > -  advertiseAddress: 172.19.42.231
	I0419 18:57:52.608869   14960 command_runner.go:130] > +  advertiseAddress: 172.19.42.24
	I0419 18:57:52.608869   14960 command_runner.go:130] >    bindPort: 8443
	I0419 18:57:52.608869   14960 command_runner.go:130] >  bootstrapTokens:
	I0419 18:57:52.608869   14960 command_runner.go:130] >    - groups:
	I0419 18:57:52.608869   14960 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0419 18:57:52.608869   14960 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0419 18:57:52.608869   14960 command_runner.go:130] >    name: "multinode-348000"
	I0419 18:57:52.608869   14960 command_runner.go:130] >    kubeletExtraArgs:
	I0419 18:57:52.608988   14960 command_runner.go:130] > -    node-ip: 172.19.42.231
	I0419 18:57:52.608988   14960 command_runner.go:130] > +    node-ip: 172.19.42.24
	I0419 18:57:52.608988   14960 command_runner.go:130] >    taints: []
	I0419 18:57:52.608988   14960 command_runner.go:130] >  ---
	I0419 18:57:52.608988   14960 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0419 18:57:52.609030   14960 command_runner.go:130] >  kind: ClusterConfiguration
	I0419 18:57:52.609030   14960 command_runner.go:130] >  apiServer:
	I0419 18:57:52.609030   14960 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.19.42.231"]
	I0419 18:57:52.609058   14960 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.19.42.24"]
	I0419 18:57:52.609058   14960 command_runner.go:130] >    extraArgs:
	I0419 18:57:52.609058   14960 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0419 18:57:52.609058   14960 command_runner.go:130] >  controllerManager:
	I0419 18:57:52.609058   14960 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.19.42.231
	+  advertiseAddress: 172.19.42.24
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-348000"
	   kubeletExtraArgs:
	-    node-ip: 172.19.42.231
	+    node-ip: 172.19.42.24
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.19.42.231"]
	+  certSANs: ["127.0.0.1", "localhost", "172.19.42.24"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0419 18:57:52.609058   14960 kubeadm.go:1154] stopping kube-system containers ...
	I0419 18:57:52.620488   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0419 18:57:52.648237   14960 command_runner.go:130] > 627b84abf45c
	I0419 18:57:52.648237   14960 command_runner.go:130] > e248c230a4aa
	I0419 18:57:52.648237   14960 command_runner.go:130] > da1d06ec238f
	I0419 18:57:52.648724   14960 command_runner.go:130] > 2dd294415aae
	I0419 18:57:52.648724   14960 command_runner.go:130] > 8a37c65d06fa
	I0419 18:57:52.648724   14960 command_runner.go:130] > a6586791413d
	I0419 18:57:52.648724   14960 command_runner.go:130] > 7935893e9f22
	I0419 18:57:52.648794   14960 command_runner.go:130] > dd9e5fae3950
	I0419 18:57:52.648794   14960 command_runner.go:130] > 9638ddcd5428
	I0419 18:57:52.648794   14960 command_runner.go:130] > 53f6a0049076
	I0419 18:57:52.648898   14960 command_runner.go:130] > 490377504e57
	I0419 18:57:52.648898   14960 command_runner.go:130] > e476774b8f77
	I0419 18:57:52.648898   14960 command_runner.go:130] > 187cb57784f4
	I0419 18:57:52.649016   14960 command_runner.go:130] > 00d48e11227e
	I0419 18:57:52.649016   14960 command_runner.go:130] > 6e420625b84b
	I0419 18:57:52.649081   14960 command_runner.go:130] > e5d733991bf1
	I0419 18:57:52.649915   14960 docker.go:483] Stopping containers: [627b84abf45c e248c230a4aa da1d06ec238f 2dd294415aae 8a37c65d06fa a6586791413d 7935893e9f22 dd9e5fae3950 9638ddcd5428 53f6a0049076 490377504e57 e476774b8f77 187cb57784f4 00d48e11227e 6e420625b84b e5d733991bf1]
	I0419 18:57:52.661411   14960 ssh_runner.go:195] Run: docker stop 627b84abf45c e248c230a4aa da1d06ec238f 2dd294415aae 8a37c65d06fa a6586791413d 7935893e9f22 dd9e5fae3950 9638ddcd5428 53f6a0049076 490377504e57 e476774b8f77 187cb57784f4 00d48e11227e 6e420625b84b e5d733991bf1
	I0419 18:57:52.690386   14960 command_runner.go:130] > 627b84abf45c
	I0419 18:57:52.690386   14960 command_runner.go:130] > e248c230a4aa
	I0419 18:57:52.690531   14960 command_runner.go:130] > da1d06ec238f
	I0419 18:57:52.690531   14960 command_runner.go:130] > 2dd294415aae
	I0419 18:57:52.690531   14960 command_runner.go:130] > 8a37c65d06fa
	I0419 18:57:52.690531   14960 command_runner.go:130] > a6586791413d
	I0419 18:57:52.690531   14960 command_runner.go:130] > 7935893e9f22
	I0419 18:57:52.690531   14960 command_runner.go:130] > dd9e5fae3950
	I0419 18:57:52.690531   14960 command_runner.go:130] > 9638ddcd5428
	I0419 18:57:52.690531   14960 command_runner.go:130] > 53f6a0049076
	I0419 18:57:52.690531   14960 command_runner.go:130] > 490377504e57
	I0419 18:57:52.690531   14960 command_runner.go:130] > e476774b8f77
	I0419 18:57:52.690531   14960 command_runner.go:130] > 187cb57784f4
	I0419 18:57:52.690531   14960 command_runner.go:130] > 00d48e11227e
	I0419 18:57:52.690682   14960 command_runner.go:130] > 6e420625b84b
	I0419 18:57:52.690682   14960 command_runner.go:130] > e5d733991bf1
	I0419 18:57:52.704529   14960 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0419 18:57:52.744496   14960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 18:57:52.761994   14960 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0419 18:57:52.762569   14960 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0419 18:57:52.762610   14960 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0419 18:57:52.762610   14960 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 18:57:52.762610   14960 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 18:57:52.762610   14960 kubeadm.go:156] found existing configuration files:
	
	I0419 18:57:52.774154   14960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0419 18:57:52.795097   14960 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 18:57:52.795582   14960 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 18:57:52.809117   14960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 18:57:52.839883   14960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0419 18:57:52.857275   14960 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 18:57:52.857642   14960 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 18:57:52.870050   14960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 18:57:52.900981   14960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0419 18:57:52.918355   14960 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 18:57:52.918486   14960 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 18:57:52.935043   14960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 18:57:52.966152   14960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0419 18:57:52.983924   14960 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 18:57:52.984883   14960 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 18:57:52.999206   14960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0419 18:57:53.033097   14960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 18:57:53.057364   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 18:57:53.382718   14960 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0419 18:57:53.382790   14960 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0419 18:57:53.382790   14960 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0419 18:57:53.382848   14960 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0419 18:57:53.382848   14960 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0419 18:57:53.382848   14960 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0419 18:57:53.382885   14960 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0419 18:57:53.382885   14960 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0419 18:57:53.382885   14960 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0419 18:57:53.382885   14960 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0419 18:57:53.382885   14960 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0419 18:57:53.382885   14960 command_runner.go:130] > [certs] Using the existing "sa" key
	I0419 18:57:53.382962   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 18:57:54.536252   14960 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0419 18:57:54.536252   14960 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0419 18:57:54.536252   14960 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0419 18:57:54.536252   14960 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0419 18:57:54.536252   14960 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0419 18:57:54.536252   14960 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0419 18:57:54.536252   14960 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1532878s)
	I0419 18:57:54.536252   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0419 18:57:54.847668   14960 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 18:57:54.847668   14960 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 18:57:54.847668   14960 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0419 18:57:54.847668   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 18:57:54.957881   14960 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0419 18:57:54.957881   14960 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0419 18:57:54.957881   14960 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0419 18:57:54.957881   14960 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0419 18:57:54.957881   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0419 18:57:55.071564   14960 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0419 18:57:55.071719   14960 api_server.go:52] waiting for apiserver process to appear ...
	I0419 18:57:55.089546   14960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 18:57:55.593708   14960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 18:57:56.094223   14960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 18:57:56.596301   14960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 18:57:57.088270   14960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 18:57:57.114657   14960 command_runner.go:130] > 1877
	I0419 18:57:57.114657   14960 api_server.go:72] duration metric: took 2.0430155s to wait for apiserver process to appear ...
	I0419 18:57:57.114657   14960 api_server.go:88] waiting for apiserver healthz status ...
	I0419 18:57:57.114657   14960 api_server.go:253] Checking apiserver healthz at https://172.19.42.24:8443/healthz ...
	I0419 18:58:00.658967   14960 api_server.go:279] https://172.19.42.24:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0419 18:58:00.659264   14960 api_server.go:103] status: https://172.19.42.24:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0419 18:58:00.659264   14960 api_server.go:253] Checking apiserver healthz at https://172.19.42.24:8443/healthz ...
	I0419 18:58:00.752443   14960 api_server.go:279] https://172.19.42.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0419 18:58:00.753143   14960 api_server.go:103] status: https://172.19.42.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0419 18:58:01.128754   14960 api_server.go:253] Checking apiserver healthz at https://172.19.42.24:8443/healthz ...
	I0419 18:58:01.137618   14960 api_server.go:279] https://172.19.42.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0419 18:58:01.137618   14960 api_server.go:103] status: https://172.19.42.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0419 18:58:01.616585   14960 api_server.go:253] Checking apiserver healthz at https://172.19.42.24:8443/healthz ...
	I0419 18:58:01.629910   14960 api_server.go:279] https://172.19.42.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0419 18:58:01.629910   14960 api_server.go:103] status: https://172.19.42.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0419 18:58:02.122150   14960 api_server.go:253] Checking apiserver healthz at https://172.19.42.24:8443/healthz ...
	I0419 18:58:02.128537   14960 api_server.go:279] https://172.19.42.24:8443/healthz returned 200:
	ok
	I0419 18:58:02.129819   14960 round_trippers.go:463] GET https://172.19.42.24:8443/version
	I0419 18:58:02.129819   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:02.129907   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:02.129907   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:02.143374   14960 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0419 18:58:02.143374   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:02.143374   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:02.143374   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:02.143374   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:02.143374   14960 round_trippers.go:580]     Content-Length: 263
	I0419 18:58:02.143374   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:02 GMT
	I0419 18:58:02.143374   14960 round_trippers.go:580]     Audit-Id: 3f375a0a-26a4-44b4-aeca-761f67cd0ec1
	I0419 18:58:02.143374   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:02.143374   14960 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0419 18:58:02.143374   14960 api_server.go:141] control plane version: v1.30.0
	I0419 18:58:02.143374   14960 api_server.go:131] duration metric: took 5.0287063s to wait for apiserver health ...
	I0419 18:58:02.143374   14960 cni.go:84] Creating CNI manager for ""
	I0419 18:58:02.143374   14960 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0419 18:58:02.147369   14960 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0419 18:58:02.164373   14960 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0419 18:58:02.173364   14960 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0419 18:58:02.173432   14960 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0419 18:58:02.173432   14960 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0419 18:58:02.173432   14960 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0419 18:58:02.173493   14960 command_runner.go:130] > Access: 2024-04-20 01:56:28.980814400 +0000
	I0419 18:58:02.173493   14960 command_runner.go:130] > Modify: 2024-04-18 23:25:47.000000000 +0000
	I0419 18:58:02.173526   14960 command_runner.go:130] > Change: 2024-04-20 01:56:17.849000000 +0000
	I0419 18:58:02.173526   14960 command_runner.go:130] >  Birth: -
	I0419 18:58:02.173646   14960 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0419 18:58:02.173683   14960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0419 18:58:02.274816   14960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0419 18:58:03.445994   14960 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0419 18:58:03.446103   14960 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0419 18:58:03.446103   14960 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0419 18:58:03.446163   14960 command_runner.go:130] > daemonset.apps/kindnet configured
	I0419 18:58:03.446163   14960 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.1713448s)
	I0419 18:58:03.446243   14960 system_pods.go:43] waiting for kube-system pods to appear ...
	I0419 18:58:03.446490   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods
	I0419 18:58:03.446490   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.446490   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.446490   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.453080   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:03.453080   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.453080   14960 round_trippers.go:580]     Audit-Id: b5b53c7d-498a-46b7-9bac-9dd8e14fb35a
	I0419 18:58:03.453080   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.453080   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.453080   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.453080   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.453080   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.454063   14960 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1748"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87662 chars]
	I0419 18:58:03.461090   14960 system_pods.go:59] 12 kube-system pods found
	I0419 18:58:03.461090   14960 system_pods.go:61] "coredns-7db6d8ff4d-7w477" [895ddde9-466d-4abf-b6f4-594847b26c6c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0419 18:58:03.461090   14960 system_pods.go:61] "etcd-multinode-348000" [33702588-cdf3-4577-b18d-18415cca2c25] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0419 18:58:03.461090   14960 system_pods.go:61] "kindnet-mg8qs" [c6e448a2-6f0c-4c7f-aa8b-0d585c84b09e] Running
	I0419 18:58:03.461090   14960 system_pods.go:61] "kindnet-s4fsr" [46c91d5e-edfa-4254-a802-148047caeab5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0419 18:58:03.461090   14960 system_pods.go:61] "kindnet-s98rh" [551f5bde-7c56-4023-ad92-a2d7a122da60] Running
	I0419 18:58:03.461090   14960 system_pods.go:61] "kube-apiserver-multinode-348000" [13adbf1b-6c17-47a9-951d-2481680a47bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0419 18:58:03.461090   14960 system_pods.go:61] "kube-controller-manager-multinode-348000" [299bb088-9795-4452-87a8-5e96bcacedde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0419 18:58:03.461090   14960 system_pods.go:61] "kube-proxy-2jjsq" [f9666ab7-0d1f-4800-b979-6e38fecdc518] Running
	I0419 18:58:03.461090   14960 system_pods.go:61] "kube-proxy-bjv9b" [3e909d14-543a-4734-8c17-7e2b8188553d] Running
	I0419 18:58:03.461090   14960 system_pods.go:61] "kube-proxy-kj76x" [274342c4-c21f-4279-b0ea-743d8e2c1463] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0419 18:58:03.461090   14960 system_pods.go:61] "kube-scheduler-multinode-348000" [000cfafe-a513-4738-9de2-3c25244b72be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0419 18:58:03.461090   14960 system_pods.go:61] "storage-provisioner" [ffa0cfb9-91fb-4d5b-abe7-11992c731b74] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0419 18:58:03.461090   14960 system_pods.go:74] duration metric: took 14.7858ms to wait for pod list to return data ...
	I0419 18:58:03.461090   14960 node_conditions.go:102] verifying NodePressure condition ...
	I0419 18:58:03.461090   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes
	I0419 18:58:03.461090   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.461090   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.461090   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.465072   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:03.466031   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.466031   14960 round_trippers.go:580]     Audit-Id: b637c23b-59da-459f-8966-62b69ec7f601
	I0419 18:58:03.466082   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.466082   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.466082   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.466082   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.466082   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.466150   14960 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1748"},"items":[{"metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15626 chars]
	I0419 18:58:03.467491   14960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 18:58:03.467491   14960 node_conditions.go:123] node cpu capacity is 2
	I0419 18:58:03.467491   14960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 18:58:03.467491   14960 node_conditions.go:123] node cpu capacity is 2
	I0419 18:58:03.467491   14960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 18:58:03.467491   14960 node_conditions.go:123] node cpu capacity is 2
	I0419 18:58:03.467491   14960 node_conditions.go:105] duration metric: took 6.4009ms to run NodePressure ...
	I0419 18:58:03.467491   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 18:58:03.944715   14960 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0419 18:58:03.944715   14960 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0419 18:58:03.944861   14960 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0419 18:58:03.945019   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0419 18:58:03.945019   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.945096   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.945096   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.951732   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:03.951858   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.951873   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.951873   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.951873   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.951914   14960 round_trippers.go:580]     Audit-Id: 75cb39a3-db37-4085-a28e-83bda547f8d7
	I0419 18:58:03.951914   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.951942   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.953513   14960 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1753"},"items":[{"metadata":{"name":"etcd-multinode-348000","namespace":"kube-system","uid":"33702588-cdf3-4577-b18d-18415cca2c25","resourceVersion":"1741","creationTimestamp":"2024-04-20T01:58:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.42.24:2379","kubernetes.io/config.hash":"c0cfa3da6a3913c3e67500f6c3e9d72b","kubernetes.io/config.mirror":"c0cfa3da6a3913c3e67500f6c3e9d72b","kubernetes.io/config.seen":"2024-04-20T01:57:55.099346749Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:58:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 30501 chars]
	I0419 18:58:03.954964   14960 kubeadm.go:733] kubelet initialised
	I0419 18:58:03.954964   14960 kubeadm.go:734] duration metric: took 10.1029ms waiting for restarted kubelet to initialise ...
	I0419 18:58:03.954964   14960 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 18:58:03.955514   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods
	I0419 18:58:03.955514   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.955514   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.955514   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.961575   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:03.961575   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.961575   14960 round_trippers.go:580]     Audit-Id: 56d2709a-6472-464e-83d5-a0ab21fac066
	I0419 18:58:03.961575   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.961575   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.961575   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.961575   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.961575   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.963572   14960 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1753"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87069 chars]
	I0419 18:58:03.966569   14960 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:03.967572   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:03.967572   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.967572   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.967572   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.970583   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:03.970583   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.970583   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.970583   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.970583   14960 round_trippers.go:580]     Audit-Id: 1dcea75f-fec2-4370-9489-9dddfc1fe8b8
	I0419 18:58:03.970583   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.971452   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.971452   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.971678   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:03.972286   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:03.972349   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.972349   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.972349   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.977439   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:03.977439   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.977439   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.977439   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.977439   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.977439   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.977439   14960 round_trippers.go:580]     Audit-Id: 9ef7cb1b-5484-4d97-b1e4-3dbaeb285a9d
	I0419 18:58:03.977439   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.977981   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:03.978175   14960 pod_ready.go:97] node "multinode-348000" hosting pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:03.978175   14960 pod_ready.go:81] duration metric: took 11.6057ms for pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace to be "Ready" ...
	E0419 18:58:03.978175   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000" hosting pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:03.978175   14960 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:03.978175   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-348000
	I0419 18:58:03.978175   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.978175   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.978175   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.981953   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:03.982067   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.982067   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.982067   14960 round_trippers.go:580]     Audit-Id: 80e6f705-7776-439f-9862-5c10226d579d
	I0419 18:58:03.982113   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.982113   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.982113   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.982180   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.982370   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-348000","namespace":"kube-system","uid":"33702588-cdf3-4577-b18d-18415cca2c25","resourceVersion":"1741","creationTimestamp":"2024-04-20T01:58:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.42.24:2379","kubernetes.io/config.hash":"c0cfa3da6a3913c3e67500f6c3e9d72b","kubernetes.io/config.mirror":"c0cfa3da6a3913c3e67500f6c3e9d72b","kubernetes.io/config.seen":"2024-04-20T01:57:55.099346749Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:58:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6373 chars]
	I0419 18:58:03.982920   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:03.982920   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.982920   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.982981   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.988211   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:03.988211   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.988211   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.988211   14960 round_trippers.go:580]     Audit-Id: 01f3ca9f-fb59-4d02-84a9-a84e531b5cb4
	I0419 18:58:03.988211   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.988211   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.988211   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.988211   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.988211   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:03.989166   14960 pod_ready.go:97] node "multinode-348000" hosting pod "etcd-multinode-348000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:03.989166   14960 pod_ready.go:81] duration metric: took 10.9918ms for pod "etcd-multinode-348000" in "kube-system" namespace to be "Ready" ...
	E0419 18:58:03.989166   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000" hosting pod "etcd-multinode-348000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:03.989166   14960 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:03.989166   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-348000
	I0419 18:58:03.989166   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.989166   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.989166   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.992193   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:03.992193   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.992193   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.992193   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.992193   14960 round_trippers.go:580]     Audit-Id: 446ad663-8de3-472b-9060-e16ad714a213
	I0419 18:58:03.992193   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.992193   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.992193   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.993174   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-348000","namespace":"kube-system","uid":"13adbf1b-6c17-47a9-951d-2481680a47bd","resourceVersion":"1739","creationTimestamp":"2024-04-20T01:58:01Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.42.24:8443","kubernetes.io/config.hash":"af7a3c9321ace7e2a933260472b90113","kubernetes.io/config.mirror":"af7a3c9321ace7e2a933260472b90113","kubernetes.io/config.seen":"2024-04-20T01:57:55.026086199Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:58:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7929 chars]
	I0419 18:58:03.993174   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:03.993174   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.993174   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.993174   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:03.996177   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:03.996177   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:03.996177   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:03.996177   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:03.996177   14960 round_trippers.go:580]     Audit-Id: 57f68f78-761f-44e0-9b69-55a5c52e7e07
	I0419 18:58:03.996177   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:03.996177   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:03.996177   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:03.997177   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:03.997177   14960 pod_ready.go:97] node "multinode-348000" hosting pod "kube-apiserver-multinode-348000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:03.997177   14960 pod_ready.go:81] duration metric: took 8.0103ms for pod "kube-apiserver-multinode-348000" in "kube-system" namespace to be "Ready" ...
	E0419 18:58:03.997177   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000" hosting pod "kube-apiserver-multinode-348000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:03.997177   14960 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:03.997177   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-348000
	I0419 18:58:03.997177   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:03.997177   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:03.997177   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:04.000182   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:04.000182   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:04.000182   14960 round_trippers.go:580]     Audit-Id: 4a6cf02f-7c9b-480a-a20e-aa1f822c2655
	I0419 18:58:04.000182   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:04.000182   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:04.000182   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:04.000182   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:04.000182   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:04.001183   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-348000","namespace":"kube-system","uid":"299bb088-9795-4452-87a8-5e96bcacedde","resourceVersion":"1738","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"30aa2729d0c65b9f89e1ae2d151edd9b","kubernetes.io/config.mirror":"30aa2729d0c65b9f89e1ae2d151edd9b","kubernetes.io/config.seen":"2024-04-20T01:35:08.321898260Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7727 chars]
	I0419 18:58:04.001183   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:04.001183   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:04.001183   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:04.001183   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:04.004187   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:04.004187   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:04.004187   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:04.004187   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:03 GMT
	I0419 18:58:04.004187   14960 round_trippers.go:580]     Audit-Id: 4ac8285d-40e6-4016-8a1e-83d3ea5ad269
	I0419 18:58:04.004187   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:04.004187   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:04.004187   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:04.004187   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:04.005251   14960 pod_ready.go:97] node "multinode-348000" hosting pod "kube-controller-manager-multinode-348000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:04.005251   14960 pod_ready.go:81] duration metric: took 8.0746ms for pod "kube-controller-manager-multinode-348000" in "kube-system" namespace to be "Ready" ...
	E0419 18:58:04.005251   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000" hosting pod "kube-controller-manager-multinode-348000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:04.005251   14960 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2jjsq" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:04.157508   14960 request.go:629] Waited for 152.025ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2jjsq
	I0419 18:58:04.157750   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2jjsq
	I0419 18:58:04.157847   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:04.157885   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:04.157885   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:04.161740   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:04.161740   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:04.161740   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:04.161740   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:04.161740   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:04.161740   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:04.161740   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:04 GMT
	I0419 18:58:04.161740   14960 round_trippers.go:580]     Audit-Id: 6d22071c-5fdf-4004-b73c-2dede9ef23cc
	I0419 18:58:04.162270   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2jjsq","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9666ab7-0d1f-4800-b979-6e38fecdc518","resourceVersion":"1708","creationTimestamp":"2024-04-20T01:42:52Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:42:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0419 18:58:04.346096   14960 request.go:629] Waited for 183.6347ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m03
	I0419 18:58:04.346396   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m03
	I0419 18:58:04.346396   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:04.346396   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:04.346396   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:04.349148   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:04.349148   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:04.349148   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:04.349148   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:04.349148   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:04.349148   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:04 GMT
	I0419 18:58:04.349148   14960 round_trippers.go:580]     Audit-Id: f0d1db07-cdf3-4770-8c2a-ab980582dd97
	I0419 18:58:04.349148   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:04.350261   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m03","uid":"08bfca2d-b382-4052-a5b6-0a78bee7caef","resourceVersion":"1716","creationTimestamp":"2024-04-20T01:53:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_53_29_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:53:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4398 chars]
	I0419 18:58:04.350383   14960 pod_ready.go:97] node "multinode-348000-m03" hosting pod "kube-proxy-2jjsq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000-m03" has status "Ready":"Unknown"
	I0419 18:58:04.350383   14960 pod_ready.go:81] duration metric: took 345.131ms for pod "kube-proxy-2jjsq" in "kube-system" namespace to be "Ready" ...
	E0419 18:58:04.350383   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000-m03" hosting pod "kube-proxy-2jjsq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000-m03" has status "Ready":"Unknown"
	I0419 18:58:04.350383   14960 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bjv9b" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:04.548899   14960 request.go:629] Waited for 197.702ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bjv9b
	I0419 18:58:04.549179   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bjv9b
	I0419 18:58:04.549179   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:04.549179   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:04.549179   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:04.554692   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:04.554752   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:04.554752   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:04.554752   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:04.554752   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:04.554752   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:04 GMT
	I0419 18:58:04.554752   14960 round_trippers.go:580]     Audit-Id: f4192125-d973-425f-aa85-2c5ce20d2b95
	I0419 18:58:04.554817   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:04.554817   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bjv9b","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e909d14-543a-4734-8c17-7e2b8188553d","resourceVersion":"601","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
	I0419 18:58:04.752452   14960 request.go:629] Waited for 196.4962ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:58:04.752452   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:58:04.752590   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:04.752590   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:04.752590   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:04.760087   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 18:58:04.760161   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:04.760201   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:04.760201   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:04.760201   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:04 GMT
	I0419 18:58:04.760201   14960 round_trippers.go:580]     Audit-Id: 6c52c0de-d8cf-4947-bbfc-7230f03415ff
	I0419 18:58:04.760201   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:04.760241   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:04.760561   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"1672","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3826 chars]
	I0419 18:58:04.761342   14960 pod_ready.go:92] pod "kube-proxy-bjv9b" in "kube-system" namespace has status "Ready":"True"
	I0419 18:58:04.761367   14960 pod_ready.go:81] duration metric: took 410.9835ms for pod "kube-proxy-bjv9b" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:04.761425   14960 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kj76x" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:04.954605   14960 request.go:629] Waited for 192.8908ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kj76x
	I0419 18:58:04.954827   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kj76x
	I0419 18:58:04.954827   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:04.954827   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:04.954827   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:04.960479   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:04.960566   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:04.960566   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:04.960566   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:04 GMT
	I0419 18:58:04.960566   14960 round_trippers.go:580]     Audit-Id: b20d923f-8d8d-40b4-8e8b-d07f98d5f39f
	I0419 18:58:04.960566   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:04.960643   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:04.960643   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:04.960754   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kj76x","generateName":"kube-proxy-","namespace":"kube-system","uid":"274342c4-c21f-4279-b0ea-743d8e2c1463","resourceVersion":"1750","creationTimestamp":"2024-04-20T01:35:22Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0419 18:58:05.158285   14960 request.go:629] Waited for 196.5957ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:05.158285   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:05.158285   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:05.158285   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:05.158285   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:05.161856   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:05.161856   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:05.161856   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:05.161856   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:05 GMT
	I0419 18:58:05.161856   14960 round_trippers.go:580]     Audit-Id: c92c9891-ee0b-4fc1-a733-5bed4130decf
	I0419 18:58:05.161856   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:05.161856   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:05.161856   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:05.162693   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:05.162886   14960 pod_ready.go:97] node "multinode-348000" hosting pod "kube-proxy-kj76x" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:05.162886   14960 pod_ready.go:81] duration metric: took 401.4601ms for pod "kube-proxy-kj76x" in "kube-system" namespace to be "Ready" ...
	E0419 18:58:05.162886   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000" hosting pod "kube-proxy-kj76x" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:05.162886   14960 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:05.347301   14960 request.go:629] Waited for 184.415ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348000
	I0419 18:58:05.347575   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348000
	I0419 18:58:05.347637   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:05.347637   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:05.347637   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:05.351227   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:05.351844   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:05.351844   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:05 GMT
	I0419 18:58:05.351844   14960 round_trippers.go:580]     Audit-Id: ed475d8a-c6b2-41a8-8400-fe09cbd6b310
	I0419 18:58:05.351844   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:05.351844   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:05.351926   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:05.351926   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:05.352241   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-348000","namespace":"kube-system","uid":"000cfafe-a513-4738-9de2-3c25244b72be","resourceVersion":"1737","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"92813b2aed63b63058d3fd06709fa24e","kubernetes.io/config.mirror":"92813b2aed63b63058d3fd06709fa24e","kubernetes.io/config.seen":"2024-04-20T01:35:08.321899460Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5439 chars]
	I0419 18:58:05.550879   14960 request.go:629] Waited for 198.04ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:05.551253   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:05.551322   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:05.551343   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:05.551343   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:05.556254   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:05.556254   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:05.556254   14960 round_trippers.go:580]     Audit-Id: dd6509d5-df5f-4a04-b3f6-3af2738c486b
	I0419 18:58:05.556254   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:05.556254   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:05.557129   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:05.557129   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:05.557129   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:05 GMT
	I0419 18:58:05.557533   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:05.558032   14960 pod_ready.go:97] node "multinode-348000" hosting pod "kube-scheduler-multinode-348000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:05.558162   14960 pod_ready.go:81] duration metric: took 395.2748ms for pod "kube-scheduler-multinode-348000" in "kube-system" namespace to be "Ready" ...
	E0419 18:58:05.558162   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000" hosting pod "kube-scheduler-multinode-348000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000" has status "Ready":"False"
	I0419 18:58:05.558232   14960 pod_ready.go:38] duration metric: took 1.6031944s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 18:58:05.558232   14960 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0419 18:58:05.581016   14960 command_runner.go:130] > -16
	I0419 18:58:05.581016   14960 ops.go:34] apiserver oom_adj: -16
	I0419 18:58:05.581095   14960 kubeadm.go:591] duration metric: took 13.0618794s to restartPrimaryControlPlane
	I0419 18:58:05.581095   14960 kubeadm.go:393] duration metric: took 13.1309941s to StartCluster
	I0419 18:58:05.581163   14960 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:58:05.581326   14960 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 18:58:05.583108   14960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 18:58:05.584706   14960 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.42.24 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 18:58:05.584706   14960 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0419 18:58:05.590659   14960 out.go:177] * Verifying Kubernetes components...
	I0419 18:58:05.585386   14960 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:58:05.595154   14960 out.go:177] * Enabled addons: 
	I0419 18:58:05.599861   14960 addons.go:505] duration metric: took 15.1552ms for enable addons: enabled=[]
	I0419 18:58:05.611465   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 18:58:05.940894   14960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 18:58:05.977364   14960 node_ready.go:35] waiting up to 6m0s for node "multinode-348000" to be "Ready" ...
	I0419 18:58:05.977364   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:05.977364   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:05.977364   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:05.977364   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:05.980929   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:05.980929   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:05.980929   14960 round_trippers.go:580]     Audit-Id: 5b823e72-37e3-4749-8e3c-817044127e8b
	I0419 18:58:05.980929   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:05.980929   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:05.980929   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:05.980929   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:05.981558   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:05 GMT
	I0419 18:58:05.981779   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:06.493376   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:06.493376   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:06.493501   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:06.493501   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:06.497243   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:06.497243   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:06.497243   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:06.498029   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:06.498029   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:06.498029   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:06 GMT
	I0419 18:58:06.498029   14960 round_trippers.go:580]     Audit-Id: dc04da86-0711-4288-ab53-bafaf5cafc85
	I0419 18:58:06.498029   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:06.498446   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:06.988833   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:06.988952   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:06.988952   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:06.988952   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:06.999126   14960 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0419 18:58:06.999126   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:06.999126   14960 round_trippers.go:580]     Audit-Id: a9e144a2-0f2b-47e4-ac7e-6908a1386f24
	I0419 18:58:06.999126   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:06.999126   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:06.999126   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:06.999126   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:06.999126   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:06 GMT
	I0419 18:58:06.999946   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:07.489616   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:07.489616   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:07.489616   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:07.489616   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:07.494221   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:07.494221   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:07.494474   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:07.494474   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:07.494474   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:07 GMT
	I0419 18:58:07.494474   14960 round_trippers.go:580]     Audit-Id: 9e82518f-ed96-499e-959c-993a7581e1bd
	I0419 18:58:07.494474   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:07.494474   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:07.494703   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:07.993160   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:07.993160   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:07.993160   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:07.993160   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:07.996761   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:07.997823   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:07.997823   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:07.997823   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:07.997823   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:07 GMT
	I0419 18:58:07.997939   14960 round_trippers.go:580]     Audit-Id: 7b222396-a5a7-4761-8afa-58e024e1d86e
	I0419 18:58:07.997939   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:07.997939   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:07.998141   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:07.998540   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:08.491774   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:08.491774   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:08.491774   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:08.491774   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:08.496678   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:08.496678   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:08.496678   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:08.496678   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:08.496678   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:08.496678   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:08 GMT
	I0419 18:58:08.496945   14960 round_trippers.go:580]     Audit-Id: d7f073fc-ebc5-4a53-864b-88298ab470ce
	I0419 18:58:08.496945   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:08.497032   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:08.990889   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:08.990889   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:08.990889   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:08.990889   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:08.995433   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:08.995433   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:08.995433   14960 round_trippers.go:580]     Audit-Id: 62277be1-7b3b-497f-9691-0be4e0d6903b
	I0419 18:58:08.995433   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:08.995520   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:08.995520   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:08.995520   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:08.995520   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:08 GMT
	I0419 18:58:08.995718   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:09.487311   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:09.487502   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:09.487502   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:09.487502   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:09.492797   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:09.492797   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:09.492797   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:09.492797   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:09.492797   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:09 GMT
	I0419 18:58:09.492797   14960 round_trippers.go:580]     Audit-Id: 0ec245d4-c9ed-4dd1-b22d-c14e3eed2e8e
	I0419 18:58:09.492797   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:09.492797   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:09.492797   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:09.986137   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:09.986215   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:09.986215   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:09.986215   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:09.989601   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:09.989601   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:09.989601   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:09 GMT
	I0419 18:58:09.989601   14960 round_trippers.go:580]     Audit-Id: 198f07e3-9c4a-4bc0-a94a-a301104e275a
	I0419 18:58:09.989601   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:09.989601   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:09.989601   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:09.989601   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:09.990517   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:10.483090   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:10.483166   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:10.483166   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:10.483166   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:10.487627   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:10.487843   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:10.487843   14960 round_trippers.go:580]     Audit-Id: 7f0144b9-0b59-4814-98e4-04748ec905a7
	I0419 18:58:10.487843   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:10.487843   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:10.487843   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:10.487843   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:10.487843   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:10 GMT
	I0419 18:58:10.488211   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:10.488591   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:10.981905   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:10.981905   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:10.981905   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:10.981905   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:10.987221   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:10.987221   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:10.987221   14960 round_trippers.go:580]     Audit-Id: a21010df-4f10-4e74-b424-3c97f295e0c9
	I0419 18:58:10.987221   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:10.987221   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:10.987221   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:10.987221   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:10.987221   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:10 GMT
	I0419 18:58:10.987221   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:11.482219   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:11.482423   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:11.482423   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:11.482423   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:11.486379   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:11.486379   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:11.486379   14960 round_trippers.go:580]     Audit-Id: 3a82bfc8-f5b2-4c8e-aba6-d1190ccfe77f
	I0419 18:58:11.486379   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:11.486814   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:11.486814   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:11.486814   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:11.486863   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:11 GMT
	I0419 18:58:11.487131   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:11.984375   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:11.984375   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:11.984616   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:11.984616   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:11.989438   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:11.989438   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:11.989438   14960 round_trippers.go:580]     Audit-Id: 6c34e4b9-5cf7-4238-97ee-47f52fbcb9df
	I0419 18:58:11.989512   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:11.989512   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:11.989512   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:11.989512   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:11.989512   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:11 GMT
	I0419 18:58:11.989622   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:12.483365   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:12.483440   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:12.483440   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:12.483440   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:12.487812   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:12.487914   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:12.487914   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:12.487914   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:12.487914   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:12.487914   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:12.487914   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:12 GMT
	I0419 18:58:12.487914   14960 round_trippers.go:580]     Audit-Id: b0cb6da3-1724-4c6a-86da-60aca41f3b7a
	I0419 18:58:12.488217   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:12.488873   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:12.985769   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:12.985769   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:12.985769   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:12.985769   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:12.989854   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:12.989854   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:12.989854   14960 round_trippers.go:580]     Audit-Id: 596287ac-2ce3-4d07-a8cd-25176a9e90b3
	I0419 18:58:12.989854   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:12.989979   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:12.989979   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:12.989979   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:12.989979   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:12 GMT
	I0419 18:58:12.990099   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1729","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0419 18:58:13.491899   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:13.491958   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:13.491958   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:13.491958   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:13.496840   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:13.496840   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:13.496840   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:13 GMT
	I0419 18:58:13.496840   14960 round_trippers.go:580]     Audit-Id: a9be612e-f745-4164-8b9b-a485ab202080
	I0419 18:58:13.496840   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:13.496840   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:13.496840   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:13.496840   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:13.497232   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:13.992146   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:13.992194   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:13.992194   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:13.992194   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:13.996852   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:13.996852   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:13.996852   14960 round_trippers.go:580]     Audit-Id: 698f39fd-db9e-490d-9655-093ee63efa8c
	I0419 18:58:13.996852   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:13.996852   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:13.996976   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:13.996976   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:13.996976   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:13 GMT
	I0419 18:58:13.997315   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:14.489979   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:14.490085   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:14.490085   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:14.490085   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:14.497911   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 18:58:14.497911   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:14.497911   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:14.497911   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:14.497911   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:14.498111   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:14 GMT
	I0419 18:58:14.498111   14960 round_trippers.go:580]     Audit-Id: c96ca703-b5cc-483e-99fd-9b542ee5fc5d
	I0419 18:58:14.498111   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:14.498190   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:14.498858   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:14.992458   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:14.992458   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:14.992458   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:14.992458   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:14.996967   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:14.996967   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:14.997046   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:14.997069   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:14.997069   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:14.997069   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:14 GMT
	I0419 18:58:14.997069   14960 round_trippers.go:580]     Audit-Id: e74cccc8-e5fd-474f-91c6-31742f8ef8e7
	I0419 18:58:14.997069   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:14.997261   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:15.481566   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:15.481903   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:15.481984   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:15.481984   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:15.485636   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:15.485636   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:15.485636   14960 round_trippers.go:580]     Audit-Id: 385d8f41-1bac-4a6e-859d-db69eb2127e6
	I0419 18:58:15.485636   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:15.486094   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:15.486094   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:15.486094   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:15.486094   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:15 GMT
	I0419 18:58:15.486175   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:15.978206   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:15.978295   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:15.978357   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:15.978357   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:15.982784   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:15.982784   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:15.982984   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:15 GMT
	I0419 18:58:15.982984   14960 round_trippers.go:580]     Audit-Id: 6b9e2378-61f9-4b9d-a565-32ec9d4be0ef
	I0419 18:58:15.982984   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:15.982984   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:15.982984   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:15.982984   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:15.983496   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:16.480408   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:16.480408   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:16.480408   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:16.480408   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:16.483674   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:16.483674   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:16.483674   14960 round_trippers.go:580]     Audit-Id: 5c9a618b-6387-431b-a73e-20d5f3a6eff9
	I0419 18:58:16.484597   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:16.484597   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:16.484597   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:16.484597   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:16.484597   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:16 GMT
	I0419 18:58:16.484930   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:16.984638   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:16.984638   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:16.984767   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:16.984767   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:16.994928   14960 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0419 18:58:16.995717   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:16.995717   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:16.995717   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:16.995717   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:16.995717   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:16.995788   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:16 GMT
	I0419 18:58:16.995788   14960 round_trippers.go:580]     Audit-Id: c7dc3c38-79fa-429e-9797-29632803151b
	I0419 18:58:16.996087   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:16.996438   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:17.483287   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:17.483287   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:17.483287   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:17.483287   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:17.486940   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:17.486940   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:17.486940   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:17 GMT
	I0419 18:58:17.486940   14960 round_trippers.go:580]     Audit-Id: b877e136-6c02-4ac7-963d-4ab9cc1dab52
	I0419 18:58:17.486940   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:17.487943   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:17.487943   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:17.487977   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:17.488298   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:17.983477   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:17.983560   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:17.983560   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:17.983560   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:17.990789   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 18:58:17.990789   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:17.990789   14960 round_trippers.go:580]     Audit-Id: d3b51334-e138-4546-98aa-fabc11237f10
	I0419 18:58:17.990789   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:17.990789   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:17.990789   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:17.990789   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:17.990789   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:17 GMT
	I0419 18:58:17.990789   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:18.480135   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:18.480135   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:18.480135   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:18.480135   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:18.484237   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:18.484237   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:18.484314   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:18.484415   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:18.484415   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:18 GMT
	I0419 18:58:18.484415   14960 round_trippers.go:580]     Audit-Id: 5a88e0f4-805e-4b41-bada-445c0481d452
	I0419 18:58:18.484415   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:18.484500   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:18.484705   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:18.989029   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:18.989029   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:18.989103   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:18.989103   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:18.992479   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:18.992479   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:18.992479   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:18.992479   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:18.992479   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:18 GMT
	I0419 18:58:18.992479   14960 round_trippers.go:580]     Audit-Id: 51f8bd66-080e-439c-884d-ea12bc0123b1
	I0419 18:58:18.993444   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:18.993444   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:18.993606   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:19.486313   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:19.486313   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:19.486313   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:19.486313   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:19.491910   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:19.492948   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:19.492948   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:19.492948   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:19.492948   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:19.492948   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:19.492948   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:19 GMT
	I0419 18:58:19.492948   14960 round_trippers.go:580]     Audit-Id: 394534e3-c860-4482-9825-50c3edd558ee
	I0419 18:58:19.493195   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:19.493787   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:19.985432   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:19.985432   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:19.985432   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:19.985432   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:19.990055   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:19.990055   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:19.990055   14960 round_trippers.go:580]     Audit-Id: c5a00eff-7f19-472b-aa46-2ec59e6653d7
	I0419 18:58:19.990055   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:19.990055   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:19.990055   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:19.990055   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:19.990055   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:19 GMT
	I0419 18:58:19.990514   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:20.484249   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:20.484249   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:20.484249   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:20.484249   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:20.489151   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:20.489151   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:20.489151   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:20.489151   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:20.489151   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:20 GMT
	I0419 18:58:20.489151   14960 round_trippers.go:580]     Audit-Id: ecf59d52-fee9-4c61-89cb-59ff2e124630
	I0419 18:58:20.489363   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:20.489363   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:20.489686   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:20.987089   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:20.987089   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:20.987089   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:20.987089   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:20.991601   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:20.991601   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:20.991601   14960 round_trippers.go:580]     Audit-Id: 91378339-774f-40c2-99fc-3a9c160db851
	I0419 18:58:20.991601   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:20.991707   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:20.991707   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:20.991707   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:20.991707   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:20 GMT
	I0419 18:58:20.991839   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:21.487012   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:21.487251   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:21.487251   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:21.487251   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:21.491637   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:21.491696   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:21.491696   14960 round_trippers.go:580]     Audit-Id: 174ae4b8-573e-46a3-85e1-b5133f6aefe6
	I0419 18:58:21.491696   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:21.491696   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:21.491696   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:21.491696   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:21.491771   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:21 GMT
	I0419 18:58:21.492122   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:21.986202   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:21.986269   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:21.986269   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:21.986269   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:21.990059   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:21.990364   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:21.990430   14960 round_trippers.go:580]     Audit-Id: 355ca03f-5bfe-4dcc-a187-fa7ba5ccc8ee
	I0419 18:58:21.990430   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:21.990430   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:21.990430   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:21.990430   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:21.990430   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:21 GMT
	I0419 18:58:21.991726   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:21.991992   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:22.484941   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:22.484941   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:22.484941   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:22.484941   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:22.488937   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:22.489385   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:22.489472   14960 round_trippers.go:580]     Audit-Id: 8a88ec10-1a6c-41fb-b998-329bf8c60ca5
	I0419 18:58:22.489472   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:22.489472   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:22.489472   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:22.489472   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:22.489472   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:22 GMT
	I0419 18:58:22.489472   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:22.985695   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:22.985939   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:22.985939   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:22.985939   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:22.990270   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:22.990495   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:22.990495   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:22.990495   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:22.990495   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:22.990495   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:22.990652   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:22 GMT
	I0419 18:58:22.990735   14960 round_trippers.go:580]     Audit-Id: 904756ba-e5c0-4ab2-8c78-e62ab91e1b24
	I0419 18:58:22.991032   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:23.488472   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:23.488646   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:23.488646   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:23.488720   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:23.492523   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:23.492523   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:23.492523   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:23 GMT
	I0419 18:58:23.492523   14960 round_trippers.go:580]     Audit-Id: 5d7a24d2-395b-497c-9461-24a435416f57
	I0419 18:58:23.493438   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:23.493438   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:23.493438   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:23.493511   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:23.493955   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:23.989990   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:23.989990   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:23.989990   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:23.989990   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:23.996348   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:23.996348   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:23.996348   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:23.996731   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:23 GMT
	I0419 18:58:23.996731   14960 round_trippers.go:580]     Audit-Id: b7ca2701-8de8-4a09-b367-ab2626abd839
	I0419 18:58:23.996731   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:23.996731   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:23.996731   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:23.996897   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:23.997428   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:24.491745   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:24.491913   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:24.491913   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:24.492006   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:24.497994   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:24.497994   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:24.497994   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:24.497994   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:24 GMT
	I0419 18:58:24.498541   14960 round_trippers.go:580]     Audit-Id: 590b8801-1035-431c-b3db-2b5bedccac75
	I0419 18:58:24.498541   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:24.498541   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:24.498541   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:24.498741   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:24.988768   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:24.988768   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:24.988768   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:24.988768   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:24.992625   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:24.992625   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:24.992625   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:24.992625   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:24.992740   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:24.992740   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:24 GMT
	I0419 18:58:24.992740   14960 round_trippers.go:580]     Audit-Id: 40de5aeb-49e7-4cec-b1d1-3226c37e5be3
	I0419 18:58:24.992740   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:24.992973   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:25.491781   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:25.491984   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:25.492060   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:25.492060   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:25.495401   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:25.495401   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:25.495594   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:25.495594   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:25.495594   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:25.495594   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:25 GMT
	I0419 18:58:25.495594   14960 round_trippers.go:580]     Audit-Id: 0dac004f-567e-472a-b39d-e812c05dcc14
	I0419 18:58:25.495594   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:25.495925   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:25.992044   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:25.992319   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:25.992319   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:25.992319   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:25.996781   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:25.996781   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:25.996781   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:25.996781   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:25.996781   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:25.996781   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:25 GMT
	I0419 18:58:25.996781   14960 round_trippers.go:580]     Audit-Id: ab791579-2850-4a4e-ad33-3ddc721d9eaf
	I0419 18:58:25.996781   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:25.996781   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:26.492753   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:26.492753   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:26.492885   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:26.492885   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:26.497033   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:26.497092   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:26.497092   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:26.497092   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:26.497157   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:26.497157   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:26 GMT
	I0419 18:58:26.497157   14960 round_trippers.go:580]     Audit-Id: cf7b4c78-6f19-4646-b59b-e83f89eac2d9
	I0419 18:58:26.497157   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:26.497251   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:26.498131   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:26.979727   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:26.979727   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:26.979727   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:26.979727   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:26.982294   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:26.983326   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:26.983326   14960 round_trippers.go:580]     Audit-Id: 7e70477f-f172-4bc2-be7f-39abd031b1e8
	I0419 18:58:26.983326   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:26.983326   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:26.983326   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:26.983326   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:26.983326   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:26 GMT
	I0419 18:58:26.983528   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:27.481485   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:27.481485   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:27.482026   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:27.482026   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:27.485468   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:27.485468   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:27.486417   14960 round_trippers.go:580]     Audit-Id: a8e51a40-fb74-4ff7-97e0-44d54f09be54
	I0419 18:58:27.486621   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:27.486621   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:27.486621   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:27.486621   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:27.486621   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:27 GMT
	I0419 18:58:27.486862   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:27.981256   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:27.981256   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:27.981256   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:27.981256   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:27.985410   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:27.985837   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:27.985837   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:27 GMT
	I0419 18:58:27.985837   14960 round_trippers.go:580]     Audit-Id: 2e1e06bc-1c28-44fc-950f-a4500c753538
	I0419 18:58:27.985837   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:27.985837   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:27.985837   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:27.985837   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:27.986173   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:28.481822   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:28.481822   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:28.482063   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:28.482063   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:28.486243   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:28.486243   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:28.486243   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:28 GMT
	I0419 18:58:28.486243   14960 round_trippers.go:580]     Audit-Id: d235a4c3-9ce5-4a4a-9321-6bc4bc6eaf50
	I0419 18:58:28.486865   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:28.486865   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:28.486865   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:28.486865   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:28.487214   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:28.983975   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:28.983975   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:28.984102   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:28.984102   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:28.988037   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:28.988037   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:28.988037   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:28 GMT
	I0419 18:58:28.988500   14960 round_trippers.go:580]     Audit-Id: 52520614-da2c-4c7c-9a51-7e7bc6328c02
	I0419 18:58:28.988500   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:28.988500   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:28.988500   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:28.988500   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:28.988738   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:28.989684   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:29.484695   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:29.484695   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:29.484695   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:29.484695   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:29.489433   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:29.489433   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:29.489433   14960 round_trippers.go:580]     Audit-Id: 1bb7751c-3178-4dfa-99e2-d69d98abec80
	I0419 18:58:29.489433   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:29.489433   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:29.489433   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:29.490241   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:29.490241   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:29 GMT
	I0419 18:58:29.490936   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:29.981921   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:29.981992   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:29.982018   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:29.982018   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:29.986023   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:29.986023   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:29.986023   14960 round_trippers.go:580]     Audit-Id: e3d4a60a-a553-4544-975b-96b45c101e85
	I0419 18:58:29.986023   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:29.986621   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:29.986621   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:29.986621   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:29.986621   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:29 GMT
	I0419 18:58:29.986621   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:30.480661   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:30.480772   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:30.480772   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:30.480772   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:30.485136   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:30.485234   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:30.485234   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:30.485234   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:30.485234   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:30 GMT
	I0419 18:58:30.485234   14960 round_trippers.go:580]     Audit-Id: 1a1ee964-b3b5-42fa-bb7b-aaaee26dffbc
	I0419 18:58:30.485234   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:30.485234   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:30.485633   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:30.979752   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:30.979852   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:30.979852   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:30.979852   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:30.984171   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:30.984171   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:30.984434   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:30.984434   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:30.984434   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:30 GMT
	I0419 18:58:30.984434   14960 round_trippers.go:580]     Audit-Id: 2c683212-bcf8-4ec1-b437-3e6484c70512
	I0419 18:58:30.984434   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:30.984434   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:30.984648   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:31.479048   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:31.479341   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:31.479341   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:31.479341   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:31.484054   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:31.484054   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:31.484153   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:31.484153   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:31 GMT
	I0419 18:58:31.484153   14960 round_trippers.go:580]     Audit-Id: 9b6503df-3cdc-41a8-b77d-1243dbfe99ed
	I0419 18:58:31.484153   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:31.484153   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:31.484153   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:31.484351   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:31.485282   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:31.979100   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:31.979186   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:31.979186   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:31.979186   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:31.983861   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:31.983959   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:31.983959   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:31 GMT
	I0419 18:58:31.983959   14960 round_trippers.go:580]     Audit-Id: f4b80f4c-27ba-4c8e-951a-ec6c211ea215
	I0419 18:58:31.983959   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:31.983959   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:31.984049   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:31.984049   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:31.984082   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:32.492476   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:32.492538   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:32.492538   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:32.492538   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:32.496203   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:32.496203   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:32.496203   14960 round_trippers.go:580]     Audit-Id: 9106f3b5-f4a9-4013-a952-2a4cbcd86b91
	I0419 18:58:32.496203   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:32.496203   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:32.497208   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:32.497208   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:32.497208   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:32 GMT
	I0419 18:58:32.497626   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:32.991994   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:32.992203   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:32.992203   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:32.992203   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:32.997720   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:32.997815   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:32.997815   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:32 GMT
	I0419 18:58:32.997815   14960 round_trippers.go:580]     Audit-Id: e1be733d-d529-4b15-a3b3-db872c0af358
	I0419 18:58:32.997815   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:32.997815   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:32.997815   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:32.997815   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:32.998041   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:33.479218   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:33.479218   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:33.479218   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:33.479218   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:33.484075   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:33.484304   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:33.484304   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:33.484304   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:33.484304   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:33.484304   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:33 GMT
	I0419 18:58:33.484304   14960 round_trippers.go:580]     Audit-Id: 60c35584-2256-45dc-9f66-ac614b0d23f2
	I0419 18:58:33.484304   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:33.484726   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:33.992946   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:33.993031   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:33.993031   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:33.993031   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:33.997675   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:33.998044   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:33.998044   14960 round_trippers.go:580]     Audit-Id: 3cce2461-88c3-4efd-a922-113ef0176de6
	I0419 18:58:33.998044   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:33.998044   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:33.998044   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:33.998044   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:33.998125   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:33 GMT
	I0419 18:58:33.998703   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:33.999045   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:34.478396   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:34.478580   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:34.478580   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:34.478580   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:34.483102   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:34.484220   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:34.484245   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:34.484245   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:34.484245   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:34.484351   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:34.484351   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:34 GMT
	I0419 18:58:34.484379   14960 round_trippers.go:580]     Audit-Id: 7990ce83-830a-406c-bedc-1b471a256f80
	I0419 18:58:34.484576   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:34.979897   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:34.979897   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:34.980138   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:34.980138   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:34.986821   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:34.986821   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:34.986821   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:34 GMT
	I0419 18:58:34.986821   14960 round_trippers.go:580]     Audit-Id: d438732c-5f71-46fe-a51f-6324c857fcb3
	I0419 18:58:34.986821   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:34.986821   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:34.986821   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:34.986821   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:34.987479   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:35.487415   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:35.487415   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:35.487415   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:35.487415   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:35.492111   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:35.492111   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:35.492507   14960 round_trippers.go:580]     Audit-Id: f0684838-0d5b-46fe-873f-9307f8f29e58
	I0419 18:58:35.492507   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:35.492507   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:35.492507   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:35.492507   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:35.492563   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:35 GMT
	I0419 18:58:35.493149   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:35.987947   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:35.987947   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:35.987947   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:35.987947   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:35.992540   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:35.993000   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:35.993000   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:35.993000   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:35.993000   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:35 GMT
	I0419 18:58:35.993000   14960 round_trippers.go:580]     Audit-Id: 7da89ce3-966b-4635-8ded-cbe3a7720279
	I0419 18:58:35.993000   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:35.993000   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:35.993568   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:36.491899   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:36.491899   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:36.491899   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:36.491899   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:36.495733   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:36.495733   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:36.495733   14960 round_trippers.go:580]     Audit-Id: 39d1e45d-df40-41b2-a6be-b6569e69f885
	I0419 18:58:36.495733   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:36.495733   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:36.495733   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:36.495733   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:36.495733   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:36 GMT
	I0419 18:58:36.498301   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:36.498301   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:36.989042   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:36.989042   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:36.989042   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:36.989042   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:36.992655   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:36.992936   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:36.992936   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:36.992936   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:36.992936   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:36.993042   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:36 GMT
	I0419 18:58:36.993042   14960 round_trippers.go:580]     Audit-Id: e758599f-a7ce-4323-8b61-4c8330646142
	I0419 18:58:36.993042   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:36.993223   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:37.480047   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:37.480105   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:37.480162   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:37.480162   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:37.484810   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:37.485835   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:37.485835   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:37.485835   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:37.485835   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:37.485835   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:37.485835   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:37 GMT
	I0419 18:58:37.485835   14960 round_trippers.go:580]     Audit-Id: a80fdb9a-a5bd-4899-a07f-2f927f422a4e
	I0419 18:58:37.485835   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:37.978082   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:37.978082   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:37.978082   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:37.978082   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:37.982673   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:37.983234   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:37.983234   14960 round_trippers.go:580]     Audit-Id: c8b7a7d4-9fef-4e1c-a142-0ae3273042b5
	I0419 18:58:37.983315   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:37.983383   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:37.983458   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:37.983579   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:37.983951   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:37 GMT
	I0419 18:58:37.983995   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:38.478420   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:38.478597   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:38.478597   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:38.478597   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:38.483044   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:38.483044   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:38.483435   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:38 GMT
	I0419 18:58:38.483435   14960 round_trippers.go:580]     Audit-Id: 59e612c9-8ee3-4ec9-bb2f-040f12903b73
	I0419 18:58:38.483435   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:38.483478   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:38.483478   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:38.483478   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:38.483478   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:38.992775   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:38.992775   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:38.992775   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:38.992775   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:38.997391   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:38.997678   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:38.997678   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:38.997678   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:38.997678   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:38 GMT
	I0419 18:58:38.997678   14960 round_trippers.go:580]     Audit-Id: aa3a7d39-2a6e-4cdd-b0c7-993a7f6810b5
	I0419 18:58:38.997678   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:38.997678   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:38.997939   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:38.998405   14960 node_ready.go:53] node "multinode-348000" has status "Ready":"False"
	I0419 18:58:39.478348   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:39.478455   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:39.478455   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:39.478455   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:39.482274   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:39.482274   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:39.482607   14960 round_trippers.go:580]     Audit-Id: 2bb32623-0683-4ce3-ab82-7bf09fe69820
	I0419 18:58:39.482607   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:39.482607   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:39.482607   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:39.482607   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:39.482607   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:39 GMT
	I0419 18:58:39.482665   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:39.979580   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:39.979580   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:39.979580   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:39.979673   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:39.983028   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:39.983028   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:39.983762   14960 round_trippers.go:580]     Audit-Id: 57250717-021f-4608-8915-7976dba89df6
	I0419 18:58:39.983762   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:39.983762   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:39.983762   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:39.983762   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:39.983762   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:39 GMT
	I0419 18:58:39.983947   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1852","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0419 18:58:40.479580   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:40.479580   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:40.479580   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:40.479580   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:40.483201   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:40.483201   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:40.483201   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:40.483201   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:40 GMT
	I0419 18:58:40.483201   14960 round_trippers.go:580]     Audit-Id: fcdf4806-7432-46d5-b0ba-8c814f2a72b8
	I0419 18:58:40.483201   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:40.483201   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:40.483201   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:40.484393   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1901","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0419 18:58:40.485002   14960 node_ready.go:49] node "multinode-348000" has status "Ready":"True"
	I0419 18:58:40.485002   14960 node_ready.go:38] duration metric: took 34.5075655s for node "multinode-348000" to be "Ready" ...
	I0419 18:58:40.485002   14960 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 18:58:40.485187   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods
	I0419 18:58:40.485187   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:40.485187   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:40.485187   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:40.491960   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:40.491960   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:40.491960   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:40.492179   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:40.492179   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:40.492179   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:40 GMT
	I0419 18:58:40.492179   14960 round_trippers.go:580]     Audit-Id: 7c96723e-ab3e-495e-9131-b60af96c0f86
	I0419 18:58:40.492179   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:40.493699   14960 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1901"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86508 chars]
	I0419 18:58:40.498270   14960 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace to be "Ready" ...
	I0419 18:58:40.498514   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:40.498571   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:40.498595   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:40.498595   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:40.501450   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:40.501450   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:40.501450   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:40.501450   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:40.501450   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:40 GMT
	I0419 18:58:40.501450   14960 round_trippers.go:580]     Audit-Id: 262fd8ff-e2ea-4238-9261-a77d31124661
	I0419 18:58:40.501450   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:40.501450   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:40.501450   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:40.501450   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:40.501450   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:40.501450   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:40.501450   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:40.504458   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:40.504458   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:40.504458   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:40.504458   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:40 GMT
	I0419 18:58:40.504458   14960 round_trippers.go:580]     Audit-Id: 9bead262-6e5d-4fc7-8512-d581f960899d
	I0419 18:58:40.504458   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:40.504458   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:40.504458   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:40.505438   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1901","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0419 18:58:41.010413   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:41.010413   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:41.010413   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:41.010413   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:41.016549   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:41.016549   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:41.016549   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:41.016549   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:41.016549   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:41.016549   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:41.016549   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:41 GMT
	I0419 18:58:41.016549   14960 round_trippers.go:580]     Audit-Id: c0625f30-a712-402f-937a-6c78a76b7102
	I0419 18:58:41.016549   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:41.017670   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:41.017670   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:41.017755   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:41.017755   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:41.020594   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:41.020594   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:41.020594   14960 round_trippers.go:580]     Audit-Id: f42508d5-a2c5-4729-af66-b9722c159054
	I0419 18:58:41.021166   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:41.021166   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:41.021166   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:41.021166   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:41.021166   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:41 GMT
	I0419 18:58:41.021443   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1901","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0419 18:58:41.513965   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:41.514023   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:41.514023   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:41.514023   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:41.519082   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:41.519082   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:41.519082   14960 round_trippers.go:580]     Audit-Id: 1765940f-e494-44b8-9bf6-962e270c084e
	I0419 18:58:41.519082   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:41.519082   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:41.519082   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:41.519082   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:41.519184   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:41 GMT
	I0419 18:58:41.519184   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:41.520067   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:41.520157   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:41.520157   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:41.520157   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:41.523650   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:41.523650   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:41.523650   14960 round_trippers.go:580]     Audit-Id: 21ba301e-0b7e-4d46-8403-882f74962b6c
	I0419 18:58:41.523650   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:41.523650   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:41.523650   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:41.523650   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:41.523650   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:41 GMT
	I0419 18:58:41.523650   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1901","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0419 18:58:42.012364   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:42.012440   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:42.012519   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:42.012519   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:42.016760   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:42.016760   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:42.016760   14960 round_trippers.go:580]     Audit-Id: 432744f6-25be-4bcf-b2b8-92e595f26fb5
	I0419 18:58:42.016760   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:42.016760   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:42.016760   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:42.016760   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:42.016760   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:42 GMT
	I0419 18:58:42.017546   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:42.018325   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:42.018325   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:42.018421   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:42.018421   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:42.021156   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:42.021156   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:42.021156   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:42.021156   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:42.021478   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:42.021478   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:42.021478   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:42 GMT
	I0419 18:58:42.021478   14960 round_trippers.go:580]     Audit-Id: 0faf17a2-ca47-45f9-9864-5845f3737a8d
	I0419 18:58:42.021917   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1901","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0419 18:58:42.500197   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:42.500197   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:42.500197   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:42.500197   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:42.505796   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:42.505851   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:42.505851   14960 round_trippers.go:580]     Audit-Id: 1b6a65bd-9010-4d7c-a24a-815e6aee4e0a
	I0419 18:58:42.505917   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:42.505917   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:42.505945   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:42.505945   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:42.505945   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:42 GMT
	I0419 18:58:42.506130   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:42.506770   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:42.506908   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:42.506908   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:42.506908   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:42.510541   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:42.510541   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:42.510541   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:42 GMT
	I0419 18:58:42.511068   14960 round_trippers.go:580]     Audit-Id: f1c31540-1623-47ca-816a-c285ef546234
	I0419 18:58:42.511068   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:42.511068   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:42.511068   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:42.511068   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:42.511294   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1901","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0419 18:58:42.511294   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:58:42.999996   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:42.999996   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:42.999996   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:42.999996   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:43.003904   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:43.003904   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:43.003904   14960 round_trippers.go:580]     Audit-Id: 38130497-b08a-4540-ab78-f0dcd04f45a0
	I0419 18:58:43.003904   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:43.003904   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:43.003904   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:43.003904   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:43.003904   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:43 GMT
	I0419 18:58:43.011664   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:43.012620   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:43.012681   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:43.012727   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:43.012727   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:43.015419   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:43.015904   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:43.015904   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:43.015904   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:43.015904   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:43.015904   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:43.015904   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:43 GMT
	I0419 18:58:43.015904   14960 round_trippers.go:580]     Audit-Id: 2ebc50f9-46dc-44f4-a06d-ef359a840493
	I0419 18:58:43.015904   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1901","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0419 18:58:43.502638   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:43.502737   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:43.502737   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:43.502737   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:43.507182   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:43.507182   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:43.507182   14960 round_trippers.go:580]     Audit-Id: 8a0bdd57-3646-41c8-985d-7cb28ad124d7
	I0419 18:58:43.507182   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:43.507322   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:43.507322   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:43.507322   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:43.507322   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:43 GMT
	I0419 18:58:43.507448   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:43.508200   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:43.508299   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:43.508299   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:43.508299   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:43.510746   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:43.511422   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:43.511422   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:43.511422   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:43.511422   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:43 GMT
	I0419 18:58:43.511422   14960 round_trippers.go:580]     Audit-Id: 037f0cdd-9853-44e6-8c3e-02d4eb9b0885
	I0419 18:58:43.511422   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:43.511422   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:43.511707   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:44.004357   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:44.004357   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:44.004357   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:44.004357   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:44.007999   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:44.007999   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:44.008965   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:44.008965   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:44 GMT
	I0419 18:58:44.008965   14960 round_trippers.go:580]     Audit-Id: f4d297f8-e5fa-4c02-8821-ec698c6c99ee
	I0419 18:58:44.009033   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:44.009033   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:44.009033   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:44.009341   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:44.009917   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:44.009917   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:44.009917   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:44.009917   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:44.012540   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:44.012540   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:44.012540   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:44.012540   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:44 GMT
	I0419 18:58:44.013586   14960 round_trippers.go:580]     Audit-Id: 85776f77-d624-4ff2-b744-427d4a063e7f
	I0419 18:58:44.013586   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:44.013586   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:44.013586   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:44.013908   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:44.506747   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:44.506849   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:44.506849   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:44.506985   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:44.510148   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:44.510988   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:44.510988   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:44.510988   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:44.510988   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:44.510988   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:44 GMT
	I0419 18:58:44.510988   14960 round_trippers.go:580]     Audit-Id: 15d31706-61ed-4941-be75-217af00039d1
	I0419 18:58:44.510988   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:44.511355   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:44.512387   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:44.512497   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:44.512497   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:44.512497   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:44.518174   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:44.518174   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:44.518174   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:44.518174   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:44.518174   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:44 GMT
	I0419 18:58:44.518174   14960 round_trippers.go:580]     Audit-Id: 2783d27d-2bb7-460d-8316-5c3bda8ca857
	I0419 18:58:44.518174   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:44.518174   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:44.518174   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:44.518940   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:58:45.003173   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:45.003173   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:45.003173   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:45.003173   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:45.008956   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:45.008956   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:45.008956   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:45.008956   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:45 GMT
	I0419 18:58:45.008956   14960 round_trippers.go:580]     Audit-Id: eda1ce7d-4833-4d57-8ce8-6f929e1ea4ff
	I0419 18:58:45.008956   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:45.008956   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:45.008956   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:45.009310   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:45.010135   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:45.010181   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:45.010181   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:45.010181   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:45.012222   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:45.012699   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:45.012699   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:45 GMT
	I0419 18:58:45.012699   14960 round_trippers.go:580]     Audit-Id: 7c7aad20-cf5d-40f8-9038-1f5df97ce0d4
	I0419 18:58:45.012699   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:45.012699   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:45.012770   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:45.012770   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:45.013048   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:45.507116   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:45.507116   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:45.507116   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:45.507116   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:45.510189   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:45.510553   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:45.510627   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:45.510627   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:45.510627   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:45 GMT
	I0419 18:58:45.510627   14960 round_trippers.go:580]     Audit-Id: 2459bd4a-db4e-4728-9b4c-98a1e8754a66
	I0419 18:58:45.510627   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:45.510627   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:45.510762   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:45.511687   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:45.511687   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:45.511738   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:45.511738   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:45.517981   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:45.518090   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:45.518090   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:45.518090   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:45 GMT
	I0419 18:58:45.518090   14960 round_trippers.go:580]     Audit-Id: ee6d2569-c663-440a-a654-db5cc68b697b
	I0419 18:58:45.518090   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:45.518152   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:45.518152   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:45.518402   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:46.010062   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:46.010186   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:46.010186   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:46.010186   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:46.013631   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:46.013631   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:46.013631   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:46.013631   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:46.013631   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:46.013631   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:46.013631   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:46 GMT
	I0419 18:58:46.014264   14960 round_trippers.go:580]     Audit-Id: 7bb04b11-ebe6-4689-8cd3-c36985f92408
	I0419 18:58:46.014451   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:46.014883   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:46.014883   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:46.014883   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:46.014883   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:46.018492   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:46.018492   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:46.018492   14960 round_trippers.go:580]     Audit-Id: cb2779a4-ccb7-4985-b8c2-9edd7fd289ee
	I0419 18:58:46.018492   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:46.018492   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:46.018492   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:46.018492   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:46.018492   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:46 GMT
	I0419 18:58:46.019069   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:46.511584   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:46.511584   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:46.511584   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:46.511584   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:46.516442   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:46.516442   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:46.516442   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:46.516442   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:46.516442   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:46.516442   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:46 GMT
	I0419 18:58:46.516442   14960 round_trippers.go:580]     Audit-Id: 0b0946b4-2809-423c-9544-fa5f379590c4
	I0419 18:58:46.516442   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:46.516746   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:46.517523   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:46.517628   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:46.517628   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:46.517628   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:46.520686   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:46.520686   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:46.521376   14960 round_trippers.go:580]     Audit-Id: d977d06f-9214-44cb-83b8-1c2718ecec88
	I0419 18:58:46.521376   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:46.521376   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:46.521376   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:46.521376   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:46.521376   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:46 GMT
	I0419 18:58:46.521563   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:46.522336   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:58:47.013684   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:47.013684   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:47.013684   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:47.013684   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:47.018307   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:47.018614   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:47.018614   14960 round_trippers.go:580]     Audit-Id: b17a08a2-deac-4e8a-80ca-3e0169e742b5
	I0419 18:58:47.018614   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:47.018614   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:47.018614   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:47.018614   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:47.018614   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:47 GMT
	I0419 18:58:47.018838   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:47.020013   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:47.020013   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:47.020013   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:47.020013   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:47.023303   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:47.023843   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:47.023843   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:47 GMT
	I0419 18:58:47.023843   14960 round_trippers.go:580]     Audit-Id: 86b68404-8558-403b-89b8-468e97477cbc
	I0419 18:58:47.023843   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:47.023843   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:47.023843   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:47.023843   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:47.024144   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:47.512418   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:47.512418   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:47.512514   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:47.512514   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:47.515892   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:47.516502   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:47.516502   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:47.516502   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:47.516502   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:47 GMT
	I0419 18:58:47.516502   14960 round_trippers.go:580]     Audit-Id: a3394d08-f24c-4e61-ab6d-0f3bd3e5b9ac
	I0419 18:58:47.516502   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:47.516502   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:47.516901   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:47.517624   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:47.517791   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:47.517791   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:47.517791   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:47.525683   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 18:58:47.525683   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:47.525683   14960 round_trippers.go:580]     Audit-Id: 8598d40a-8430-4bf7-afe4-93f678b5c758
	I0419 18:58:47.525683   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:47.525683   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:47.525683   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:47.525683   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:47.525683   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:47 GMT
	I0419 18:58:47.525683   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:48.011314   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:48.011314   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:48.011395   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:48.011395   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:48.014911   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:48.014911   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:48.014911   14960 round_trippers.go:580]     Audit-Id: 1f5b3f54-4f6d-4d7a-941a-1bdba1686f07
	I0419 18:58:48.014911   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:48.015771   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:48.015771   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:48.015771   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:48.015771   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:48 GMT
	I0419 18:58:48.016059   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:48.016825   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:48.016825   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:48.016825   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:48.016825   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:48.019045   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:48.019045   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:48.019045   14960 round_trippers.go:580]     Audit-Id: 6019d7ca-c58e-4927-8795-94668e15ef17
	I0419 18:58:48.020060   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:48.020060   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:48.020060   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:48.020060   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:48.020060   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:48 GMT
	I0419 18:58:48.020434   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:48.513462   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:48.513462   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:48.513462   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:48.513462   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:48.520292   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:48.520850   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:48.520850   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:48.520850   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:48.520850   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:48 GMT
	I0419 18:58:48.520945   14960 round_trippers.go:580]     Audit-Id: 3b151ef3-62e5-4321-9357-841370841fd0
	I0419 18:58:48.520978   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:48.520978   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:48.520978   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:48.521802   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:48.521802   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:48.521802   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:48.521802   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:48.524516   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:48.524516   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:48.524516   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:48.524516   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:48 GMT
	I0419 18:58:48.524516   14960 round_trippers.go:580]     Audit-Id: 958b8c88-76b8-4622-b0e8-989840ad5c5c
	I0419 18:58:48.524516   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:48.524516   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:48.524516   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:48.526019   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:48.526564   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:58:49.010936   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:49.010936   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:49.010936   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:49.010936   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:49.014527   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:49.015578   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:49.015578   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:49.015578   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:49.015578   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:49 GMT
	I0419 18:58:49.015578   14960 round_trippers.go:580]     Audit-Id: c45c5942-1a77-4be5-b9ba-94f619bcde8f
	I0419 18:58:49.015578   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:49.015578   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:49.015578   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:49.016767   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:49.016832   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:49.016832   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:49.016832   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:49.020651   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:49.020651   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:49.020651   14960 round_trippers.go:580]     Audit-Id: aff4742f-c407-425a-b1bc-0d1a2f93d69a
	I0419 18:58:49.020651   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:49.020651   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:49.020651   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:49.020841   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:49.020841   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:49 GMT
	I0419 18:58:49.020992   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:49.509753   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:49.509936   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:49.509936   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:49.509936   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:49.513498   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:49.514526   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:49.514570   14960 round_trippers.go:580]     Audit-Id: 0758c54c-524b-4e9f-8a09-9e995f3075fc
	I0419 18:58:49.514681   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:49.514681   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:49.514681   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:49.514681   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:49.514681   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:49 GMT
	I0419 18:58:49.514928   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:49.515749   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:49.515749   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:49.515749   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:49.515835   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:49.518499   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:49.518499   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:49.518499   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:49 GMT
	I0419 18:58:49.518688   14960 round_trippers.go:580]     Audit-Id: a7e0c727-aee0-40fc-a1ae-9030dee06eda
	I0419 18:58:49.518688   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:49.518688   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:49.518688   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:49.518688   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:49.519180   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:50.009571   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:50.009571   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:50.009571   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:50.009571   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:50.014182   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:50.014182   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:50.014182   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:50.014182   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:50.014308   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:50.014308   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:50 GMT
	I0419 18:58:50.014308   14960 round_trippers.go:580]     Audit-Id: 0fd45140-7851-441f-ad86-173b46e5e47e
	I0419 18:58:50.014308   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:50.014375   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:50.015446   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:50.015446   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:50.015446   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:50.015542   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:50.017880   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:50.018880   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:50.018880   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:50.018880   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:50.018880   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:50.018880   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:50 GMT
	I0419 18:58:50.018880   14960 round_trippers.go:580]     Audit-Id: 43dbd2b3-4e19-4ec0-b0ea-7ee0ba70a166
	I0419 18:58:50.018880   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:50.018880   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:50.506222   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:50.506493   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:50.506493   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:50.506493   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:50.509930   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:50.509930   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:50.509930   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:50.509930   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:50 GMT
	I0419 18:58:50.509930   14960 round_trippers.go:580]     Audit-Id: fef51eaa-9269-49ed-a54a-e069f1402030
	I0419 18:58:50.509930   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:50.510919   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:50.510919   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:50.511110   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:50.511966   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:50.512035   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:50.512035   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:50.512035   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:50.514699   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:50.514699   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:50.514699   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:50 GMT
	I0419 18:58:50.515194   14960 round_trippers.go:580]     Audit-Id: dcdd70dc-a934-4a77-b83f-7520e2e9e133
	I0419 18:58:50.515194   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:50.515194   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:50.515194   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:50.515301   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:50.515515   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:51.004734   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:51.004734   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:51.004734   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:51.004734   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:51.008744   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:51.009549   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:51.009549   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:51.009549   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:51.009549   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:51.009549   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:51 GMT
	I0419 18:58:51.009549   14960 round_trippers.go:580]     Audit-Id: b617140e-68e1-47b8-b2b4-111f39118d39
	I0419 18:58:51.009640   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:51.009890   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:51.010995   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:51.011081   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:51.011081   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:51.011081   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:51.015033   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:51.015033   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:51.015033   14960 round_trippers.go:580]     Audit-Id: 7e7a4518-de21-40dc-8993-d243bb1dd849
	I0419 18:58:51.015033   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:51.015033   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:51.015223   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:51.015223   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:51.015223   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:51 GMT
	I0419 18:58:51.015223   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:51.016160   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:58:51.503559   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:51.503559   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:51.503559   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:51.503559   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:51.507167   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:51.507167   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:51.508146   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:51.508169   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:51.508169   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:51.508169   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:51 GMT
	I0419 18:58:51.508169   14960 round_trippers.go:580]     Audit-Id: 1beb3eff-e1fb-4c08-89da-b3aac0f1124a
	I0419 18:58:51.508169   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:51.508756   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:51.509545   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:51.509681   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:51.509681   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:51.509681   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:51.513021   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:51.513021   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:51.513021   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:51 GMT
	I0419 18:58:51.513021   14960 round_trippers.go:580]     Audit-Id: 3aa074e9-c0f6-40fc-ad77-6a5f48c89484
	I0419 18:58:51.513021   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:51.513021   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:51.513021   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:51.513021   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:51.513581   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:52.001462   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:52.001462   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:52.001462   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:52.001462   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:52.005136   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:52.005136   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:52.005136   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:52.005136   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:52.005136   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:52.005136   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:52.005136   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:52 GMT
	I0419 18:58:52.005136   14960 round_trippers.go:580]     Audit-Id: 97d85d44-715c-416f-810a-0faddabd4dfd
	I0419 18:58:52.005136   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:52.006791   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:52.006928   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:52.006928   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:52.007012   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:52.010365   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:52.010365   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:52.010365   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:52.010365   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:52.010365   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:52.010365   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:52 GMT
	I0419 18:58:52.010365   14960 round_trippers.go:580]     Audit-Id: 25ac4cba-a0de-4b5f-9a68-4919db795540
	I0419 18:58:52.010365   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:52.010365   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:52.499815   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:52.499868   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:52.499868   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:52.499868   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:52.503714   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:52.504370   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:52.504370   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:52.504370   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:52 GMT
	I0419 18:58:52.504370   14960 round_trippers.go:580]     Audit-Id: 357cd200-9af7-4b5d-97e9-224d193eae73
	I0419 18:58:52.504370   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:52.504370   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:52.504370   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:52.504624   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:52.505018   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:52.505018   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:52.505018   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:52.505018   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:52.510855   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:52.510855   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:52.510855   14960 round_trippers.go:580]     Audit-Id: 749b5414-bf8c-45d2-9622-49bec90f465e
	I0419 18:58:52.510855   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:52.510855   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:52.510855   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:52.510855   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:52.510855   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:52 GMT
	I0419 18:58:52.510855   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:53.002426   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:53.002471   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:53.002471   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:53.002471   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:53.006426   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:53.007129   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:53.007203   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:53 GMT
	I0419 18:58:53.007203   14960 round_trippers.go:580]     Audit-Id: 491108d3-e699-4762-b791-1915b7fcb83b
	I0419 18:58:53.007203   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:53.007203   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:53.007203   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:53.007203   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:53.007524   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:53.007746   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:53.007746   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:53.007746   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:53.007746   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:53.010479   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:53.010479   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:53.010479   14960 round_trippers.go:580]     Audit-Id: 2aa824ef-1ff1-4806-aeb2-492a07079c6e
	I0419 18:58:53.010479   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:53.010479   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:53.011506   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:53.011506   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:53.011506   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:53 GMT
	I0419 18:58:53.011667   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:53.500307   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:53.500307   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:53.500307   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:53.500406   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:53.504977   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:53.504977   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:53.504977   14960 round_trippers.go:580]     Audit-Id: e410c4e5-a8f0-46c8-8624-ce0c1ee8eb22
	I0419 18:58:53.504977   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:53.505065   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:53.505065   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:53.505065   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:53.505065   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:53 GMT
	I0419 18:58:53.505322   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:53.506213   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:53.506213   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:53.506279   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:53.506279   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:53.508662   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:53.508662   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:53.508662   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:53.508662   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:53 GMT
	I0419 18:58:53.508662   14960 round_trippers.go:580]     Audit-Id: 08d64dc2-74ca-4e2d-b9ef-cdb78bdd3955
	I0419 18:58:53.508662   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:53.508662   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:53.508662   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:53.513178   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:53.513178   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:58:53.999467   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:53.999467   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:53.999467   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:53.999467   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:54.004414   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:54.004414   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:54.004520   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:54.004520   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:54.004520   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:54.004520   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:54.004520   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:54 GMT
	I0419 18:58:54.004520   14960 round_trippers.go:580]     Audit-Id: e7f4d3c5-a75d-4481-8d83-997ff25b7c09
	I0419 18:58:54.005521   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:54.006662   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:54.006662   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:54.006662   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:54.006662   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:54.010065   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:54.010065   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:54.010065   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:54 GMT
	I0419 18:58:54.010337   14960 round_trippers.go:580]     Audit-Id: 7f9704e7-cd3f-4a03-8bf0-118a39946eba
	I0419 18:58:54.010337   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:54.010337   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:54.010337   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:54.010337   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:54.010701   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:54.513136   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:54.513367   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:54.513367   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:54.513367   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:54.518936   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:54.518936   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:54.518936   14960 round_trippers.go:580]     Audit-Id: b11c20dc-7692-4af5-b5e2-bb3be0ead9d6
	I0419 18:58:54.519494   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:54.519494   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:54.519494   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:54.519494   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:54.519494   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:54 GMT
	I0419 18:58:54.519705   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:54.520373   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:54.520492   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:54.520492   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:54.520492   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:54.523846   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:54.524197   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:54.524197   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:54.524197   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:54.524243   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:54.524243   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:54.524243   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:54 GMT
	I0419 18:58:54.524243   14960 round_trippers.go:580]     Audit-Id: 0bc51c75-d9cf-4afe-94a8-6b8abe378ab6
	I0419 18:58:54.524275   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:55.012471   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:55.012471   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:55.012471   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:55.012471   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:55.016049   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:55.016049   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:55.016049   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:55.016049   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:55 GMT
	I0419 18:58:55.016049   14960 round_trippers.go:580]     Audit-Id: 4d7a9db6-2056-402f-a1a5-137ce4c25d84
	I0419 18:58:55.016049   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:55.016630   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:55.016630   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:55.017587   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:55.018280   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:55.018280   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:55.018280   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:55.018280   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:55.020867   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:55.020867   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:55.020867   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:55.020867   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:55.020867   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:55.020867   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:55 GMT
	I0419 18:58:55.020867   14960 round_trippers.go:580]     Audit-Id: d1059101-eeb0-4cb2-b0f8-f0d0c7d9ef99
	I0419 18:58:55.020867   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:55.021669   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:55.501684   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:55.501791   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:55.501791   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:55.501791   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:55.505144   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:55.505557   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:55.505557   14960 round_trippers.go:580]     Audit-Id: 55c7be2a-6365-46ce-8f95-91a0a2e67773
	I0419 18:58:55.505557   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:55.505557   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:55.505557   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:55.505557   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:55.505557   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:55 GMT
	I0419 18:58:55.505967   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:55.506845   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:55.506845   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:55.506845   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:55.506845   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:55.511660   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:55.512186   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:55.512186   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:55.512186   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:55 GMT
	I0419 18:58:55.512186   14960 round_trippers.go:580]     Audit-Id: d0a750ff-7bdd-4061-af4d-4b88893a553f
	I0419 18:58:55.512186   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:55.512186   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:55.512186   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:55.512383   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:56.001930   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:56.002032   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:56.002032   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:56.002032   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:56.005975   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:56.006296   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:56.006397   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:56.006397   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:56 GMT
	I0419 18:58:56.006397   14960 round_trippers.go:580]     Audit-Id: d81c54bc-a6c1-48cc-ac44-3b9cdeea4d7f
	I0419 18:58:56.006397   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:56.006397   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:56.006397   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:56.006542   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:56.007566   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:56.007566   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:56.007651   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:56.007651   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:56.010789   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:56.010900   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:56.010939   14960 round_trippers.go:580]     Audit-Id: c8baf146-e2ca-4588-ae9c-09d2a23ce8f7
	I0419 18:58:56.010939   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:56.010939   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:56.010939   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:56.010986   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:56.010986   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:56 GMT
	I0419 18:58:56.011439   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:56.011439   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:58:56.500228   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:56.500506   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:56.500506   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:56.500506   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:56.505887   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:56.506799   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:56.506799   14960 round_trippers.go:580]     Audit-Id: 936953f8-729f-4bfd-9e01-b52403f31203
	I0419 18:58:56.506799   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:56.506799   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:56.506799   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:56.506875   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:56.506875   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:56 GMT
	I0419 18:58:56.507073   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:56.507886   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:56.507972   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:56.507972   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:56.507972   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:56.510970   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:56.511837   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:56.511837   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:56.511837   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:56.511837   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:56.511837   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:56.511837   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:56 GMT
	I0419 18:58:56.511837   14960 round_trippers.go:580]     Audit-Id: c76c2af0-fbdf-46da-96be-e3956002b641
	I0419 18:58:56.512168   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:57.012579   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:57.012579   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:57.012579   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:57.012579   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:57.016180   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:57.016862   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:57.016862   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:57.016862   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:57.016926   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:57.016926   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:57 GMT
	I0419 18:58:57.016926   14960 round_trippers.go:580]     Audit-Id: 50ef15e1-0670-4d21-9630-0ffec2d58ff7
	I0419 18:58:57.016926   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:57.017186   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:57.017506   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:57.017506   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:57.017506   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:57.017506   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:57.023081   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:58:57.023081   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:57.023081   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:57.023081   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:57 GMT
	I0419 18:58:57.023081   14960 round_trippers.go:580]     Audit-Id: 0aeb00b0-ba93-4b86-b408-2259ba7d36f9
	I0419 18:58:57.023081   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:57.023081   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:57.023081   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:57.023081   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:57.511101   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:57.511101   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:57.511242   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:57.511242   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:57.515145   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:57.515876   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:57.515876   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:57.515876   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:57.515876   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:57.515876   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:57 GMT
	I0419 18:58:57.515876   14960 round_trippers.go:580]     Audit-Id: a817d97f-3c7f-44dc-9116-54701b724a43
	I0419 18:58:57.515876   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:57.516041   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:57.516930   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:57.517010   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:57.517010   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:57.517010   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:57.520227   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:57.520227   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:57.520227   14960 round_trippers.go:580]     Audit-Id: 959dc126-1d59-416b-adee-c94f879a422b
	I0419 18:58:57.520527   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:57.520527   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:57.520527   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:57.520527   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:57.520527   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:57 GMT
	I0419 18:58:57.520637   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:58.012663   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:58.012663   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:58.012663   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:58.012663   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:58.018936   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:58:58.018936   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:58.018936   14960 round_trippers.go:580]     Audit-Id: 130eded6-9525-4de3-b78c-80914fe8c554
	I0419 18:58:58.019557   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:58.019557   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:58.019557   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:58.019557   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:58.019557   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:58 GMT
	I0419 18:58:58.019875   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:58.020687   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:58.020687   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:58.020687   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:58.020687   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:58.023623   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:58.023623   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:58.023623   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:58.023986   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:58.023986   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:58 GMT
	I0419 18:58:58.023986   14960 round_trippers.go:580]     Audit-Id: 8bff6465-a344-4dfc-89cc-23f13cbd1eab
	I0419 18:58:58.023986   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:58.023986   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:58.024360   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:58.024849   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:58:58.499426   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:58.499426   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:58.499521   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:58.499521   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:58.504473   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:58.504473   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:58.504473   14960 round_trippers.go:580]     Audit-Id: f62344a5-d5b7-4b71-833e-99f5c94e9df7
	I0419 18:58:58.504536   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:58.504536   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:58.504536   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:58.504536   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:58.504536   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:58 GMT
	I0419 18:58:58.504744   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:58.505825   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:58.505877   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:58.505877   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:58.505877   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:58.513362   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 18:58:58.513362   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:58.513362   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:58.513362   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:58.513362   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:58.513362   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:58.513362   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:58 GMT
	I0419 18:58:58.513362   14960 round_trippers.go:580]     Audit-Id: 5c7ff8af-1fc8-4c46-9775-bd89bb824d2c
	I0419 18:58:58.515125   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:59.014064   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:59.014064   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:59.014064   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:59.014064   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:59.018725   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:58:59.018725   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:59.018725   14960 round_trippers.go:580]     Audit-Id: 6db8b4c9-bce4-493b-98ce-3e79fb242698
	I0419 18:58:59.019579   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:59.019579   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:59.019579   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:59.019579   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:59.019579   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:59 GMT
	I0419 18:58:59.019831   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:59.020334   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:59.020334   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:59.020334   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:59.020334   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:59.023228   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:59.023228   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:59.023228   14960 round_trippers.go:580]     Audit-Id: 1176868a-c5a1-4b96-a026-1e089fa39aed
	I0419 18:58:59.023228   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:59.023228   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:59.023228   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:59.024229   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:59.024229   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:59 GMT
	I0419 18:58:59.024570   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:58:59.501412   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:58:59.501412   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:59.501412   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:59.501412   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:59.505091   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:58:59.505091   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:59.505520   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:59.505520   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:59.505520   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:59.505520   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:59 GMT
	I0419 18:58:59.505520   14960 round_trippers.go:580]     Audit-Id: e1bb8c4f-ef14-41ad-9636-e3c6440e65b9
	I0419 18:58:59.505520   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:59.505642   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:58:59.506293   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:58:59.506293   14960 round_trippers.go:469] Request Headers:
	I0419 18:58:59.506293   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:58:59.506293   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:58:59.508922   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:58:59.508922   14960 round_trippers.go:577] Response Headers:
	I0419 18:58:59.509918   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:58:59 GMT
	I0419 18:58:59.509918   14960 round_trippers.go:580]     Audit-Id: e4c4c918-1ac1-4da5-b401-a418aa104662
	I0419 18:58:59.509918   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:58:59.509961   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:58:59.509961   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:58:59.509961   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:58:59.510315   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:00.003096   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:00.003266   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:00.003266   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:00.003266   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:00.006954   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:00.007729   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:00.007729   14960 round_trippers.go:580]     Audit-Id: 74b62b74-b90c-4ac0-8dfe-0b106b05cf3e
	I0419 18:59:00.007729   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:00.007729   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:00.007729   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:00.007729   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:00.007729   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:00 GMT
	I0419 18:59:00.007791   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:00.008661   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:00.008759   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:00.008916   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:00.008916   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:00.012123   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:00.012123   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:00.012123   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:00 GMT
	I0419 18:59:00.012123   14960 round_trippers.go:580]     Audit-Id: deeb82b0-7ce0-406e-90dc-9d4d63109604
	I0419 18:59:00.012123   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:00.012123   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:00.012123   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:00.012123   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:00.012785   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:00.502120   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:00.502120   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:00.502120   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:00.502120   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:00.507096   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:00.507096   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:00.507096   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:00.507096   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:00.507096   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:00.507096   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:00 GMT
	I0419 18:59:00.507096   14960 round_trippers.go:580]     Audit-Id: 567592b3-545b-42a2-aab3-6b51f08293c5
	I0419 18:59:00.507096   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:00.507096   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:00.508181   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:00.508265   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:00.508265   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:00.508338   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:00.510672   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:00.511131   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:00.511131   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:00.511131   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:00 GMT
	I0419 18:59:00.511131   14960 round_trippers.go:580]     Audit-Id: 777797ae-d162-4ff7-9caf-e4b150a5facc
	I0419 18:59:00.511131   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:00.511131   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:00.511215   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:00.511522   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:00.511988   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:59:01.004612   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:01.004867   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:01.004867   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:01.004867   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:01.011467   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:59:01.011467   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:01.011467   14960 round_trippers.go:580]     Audit-Id: 7975e44c-afd5-460e-9a1e-ea016ce20729
	I0419 18:59:01.011467   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:01.011467   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:01.011467   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:01.011467   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:01.011467   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:01 GMT
	I0419 18:59:01.011467   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:01.012210   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:01.012210   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:01.012210   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:01.012210   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:01.015558   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:01.015558   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:01.015558   14960 round_trippers.go:580]     Audit-Id: 16f05b91-af11-4637-9245-77432f3b03e1
	I0419 18:59:01.015558   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:01.015558   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:01.015558   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:01.015558   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:01.015558   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:01 GMT
	I0419 18:59:01.016379   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:01.501671   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:01.501671   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:01.501671   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:01.501671   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:01.507260   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:59:01.507449   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:01.507449   14960 round_trippers.go:580]     Audit-Id: 413b0082-06d6-4dff-b059-cd2c76d48f3f
	I0419 18:59:01.507449   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:01.507449   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:01.507449   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:01.507449   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:01.507449   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:01 GMT
	I0419 18:59:01.507549   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:01.507549   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:01.507549   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:01.507549   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:01.507549   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:01.511560   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:01.511560   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:01.511560   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:01.511560   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:01 GMT
	I0419 18:59:01.511560   14960 round_trippers.go:580]     Audit-Id: db3a1cf5-3ab6-4d48-9e8f-24b32c8c05f8
	I0419 18:59:01.511805   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:01.511805   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:01.511805   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:01.512126   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:02.005151   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:02.005151   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:02.005229   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:02.005229   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:02.009102   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:02.009832   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:02.009832   14960 round_trippers.go:580]     Audit-Id: c98490b3-67f7-4399-9985-994bd877d913
	I0419 18:59:02.009832   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:02.009832   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:02.009832   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:02.009832   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:02.009832   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:02 GMT
	I0419 18:59:02.010055   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:02.010974   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:02.010974   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:02.010974   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:02.011096   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:02.016006   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:02.016006   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:02.016006   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:02.016006   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:02.016006   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:02 GMT
	I0419 18:59:02.016006   14960 round_trippers.go:580]     Audit-Id: 69fbbef0-1c12-411e-ad75-3d4dd969686b
	I0419 18:59:02.016006   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:02.016006   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:02.016717   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:02.501212   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:02.501468   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:02.501468   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:02.501468   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:02.506831   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:59:02.506831   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:02.506831   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:02 GMT
	I0419 18:59:02.506831   14960 round_trippers.go:580]     Audit-Id: 96e4394e-e07b-401a-b6e0-28622a2d3e86
	I0419 18:59:02.506831   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:02.506831   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:02.506831   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:02.506831   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:02.506831   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:02.508176   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:02.508176   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:02.508176   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:02.508176   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:02.510748   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:02.510748   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:02.511238   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:02.511238   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:02.511238   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:02 GMT
	I0419 18:59:02.511238   14960 round_trippers.go:580]     Audit-Id: 69dedb6c-2273-48cb-8532-f6ee18a4281b
	I0419 18:59:02.511238   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:02.511238   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:02.511312   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:02.512010   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:59:03.007472   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:03.007472   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:03.007472   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:03.007472   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:03.011089   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:03.011089   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:03.011089   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:03.011516   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:03.011516   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:03.011516   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:03.011516   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:03 GMT
	I0419 18:59:03.011516   14960 round_trippers.go:580]     Audit-Id: e99ebeca-6c53-4334-8236-77a3efe1afe6
	I0419 18:59:03.011575   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:03.012551   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:03.012626   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:03.012626   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:03.012626   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:03.015397   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:03.015397   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:03.016223   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:03.016223   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:03 GMT
	I0419 18:59:03.016223   14960 round_trippers.go:580]     Audit-Id: a8b578ac-9797-4389-a763-7d529c019a00
	I0419 18:59:03.016223   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:03.016223   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:03.016223   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:03.017085   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:03.510406   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:03.510503   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:03.510503   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:03.510503   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:03.513905   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:03.513905   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:03.513905   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:03 GMT
	I0419 18:59:03.513905   14960 round_trippers.go:580]     Audit-Id: b606eb40-7df5-4a63-8177-5657c6f57692
	I0419 18:59:03.514866   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:03.514866   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:03.514866   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:03.514866   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:03.515118   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:03.515783   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:03.515783   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:03.515783   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:03.515783   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:03.519128   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:03.519128   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:03.519220   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:03.519220   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:03.519220   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:03 GMT
	I0419 18:59:03.519220   14960 round_trippers.go:580]     Audit-Id: b769c591-b163-447c-90bc-1092ce12dddc
	I0419 18:59:03.519220   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:03.519220   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:03.519553   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:04.011815   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:04.011942   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:04.011942   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:04.011942   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:04.015874   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:04.016653   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:04.016653   14960 round_trippers.go:580]     Audit-Id: 4dab5c08-30f7-464c-842b-06d5e943f8a6
	I0419 18:59:04.016653   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:04.016653   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:04.016653   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:04.016787   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:04.016787   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:04 GMT
	I0419 18:59:04.016960   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:04.017746   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:04.017746   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:04.017851   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:04.017851   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:04.024090   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:59:04.024647   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:04.024647   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:04.024647   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:04 GMT
	I0419 18:59:04.024647   14960 round_trippers.go:580]     Audit-Id: 4e67dd41-3f9a-4588-8f66-5321d63c9bc8
	I0419 18:59:04.024697   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:04.024697   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:04.024697   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:04.024896   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:04.510605   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:04.510605   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:04.510605   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:04.510605   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:04.515215   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:04.515451   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:04.515451   14960 round_trippers.go:580]     Audit-Id: 0e87fa54-ba63-490e-8b3d-9f9734f6ff85
	I0419 18:59:04.515451   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:04.515451   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:04.515451   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:04.515451   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:04.515451   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:04 GMT
	I0419 18:59:04.515924   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:04.516701   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:04.516701   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:04.516701   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:04.516701   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:04.520034   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:04.520034   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:04.520358   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:04 GMT
	I0419 18:59:04.520358   14960 round_trippers.go:580]     Audit-Id: 83f30d46-0c5a-4fea-a2c6-276a2c6ab27b
	I0419 18:59:04.520358   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:04.520358   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:04.520358   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:04.520358   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:04.520819   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:04.521162   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:59:05.010123   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:05.010123   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:05.010123   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:05.010123   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:05.013667   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:05.013667   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:05.013667   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:05.013667   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:05.013667   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:05 GMT
	I0419 18:59:05.013667   14960 round_trippers.go:580]     Audit-Id: 3a9ef89a-2e33-4597-918b-23dede77582f
	I0419 18:59:05.013667   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:05.013667   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:05.014326   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:05.015088   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:05.015088   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:05.015088   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:05.015088   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:05.019417   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:05.019417   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:05.019417   14960 round_trippers.go:580]     Audit-Id: a484465a-6a6e-4370-be20-d59692ad3e71
	I0419 18:59:05.019417   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:05.019417   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:05.019417   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:05.019417   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:05.019417   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:05 GMT
	I0419 18:59:05.019417   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:05.498802   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:05.498802   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:05.498802   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:05.498802   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:05.503727   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:05.504233   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:05.504233   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:05.504278   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:05.504278   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:05 GMT
	I0419 18:59:05.504278   14960 round_trippers.go:580]     Audit-Id: 4ba016da-dbb8-4f20-910f-365004ca45f8
	I0419 18:59:05.504278   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:05.504278   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:05.504420   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:05.505404   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:05.505442   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:05.505442   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:05.505442   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:05.511078   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:59:05.511078   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:05.511078   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:05.511078   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:05 GMT
	I0419 18:59:05.511078   14960 round_trippers.go:580]     Audit-Id: a4332a83-aa62-4b96-aef0-b62a80262f9c
	I0419 18:59:05.511078   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:05.511078   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:05.511078   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:05.512058   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:06.007697   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:06.007697   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:06.007697   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:06.007697   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:06.011287   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:06.011416   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:06.011478   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:06.011478   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:06.011478   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:06.011478   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:06 GMT
	I0419 18:59:06.011478   14960 round_trippers.go:580]     Audit-Id: 2115d770-5519-4034-bc3a-b8952ec7043a
	I0419 18:59:06.011478   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:06.011767   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:06.012396   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:06.012396   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:06.012523   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:06.012523   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:06.014786   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:06.014786   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:06.014786   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:06.014786   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:06 GMT
	I0419 18:59:06.014786   14960 round_trippers.go:580]     Audit-Id: d588e780-f10e-4fe6-a4bd-fc9d41dd1d91
	I0419 18:59:06.014786   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:06.014786   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:06.014786   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:06.015662   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:06.513465   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:06.513465   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:06.513571   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:06.513571   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:06.517226   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:06.517226   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:06.517296   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:06 GMT
	I0419 18:59:06.517296   14960 round_trippers.go:580]     Audit-Id: f8b2f3c4-a973-4dea-8b42-717975851e34
	I0419 18:59:06.517296   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:06.517296   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:06.517296   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:06.517296   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:06.517611   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:06.518382   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:06.518471   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:06.518471   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:06.518471   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:06.520951   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:06.520951   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:06.520951   14960 round_trippers.go:580]     Audit-Id: 302db986-dfa0-4b81-9f04-fcba2af125c2
	I0419 18:59:06.520951   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:06.521259   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:06.521259   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:06.521259   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:06.521259   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:06 GMT
	I0419 18:59:06.521885   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:06.522496   14960 pod_ready.go:102] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"False"
	I0419 18:59:07.014055   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:07.014140   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.014140   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.014140   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.020420   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 18:59:07.020420   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.020420   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.020882   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.020882   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.020882   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.020882   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.020882   14960 round_trippers.go:580]     Audit-Id: f0709ebb-18c8-4915-a343-02786ccbfac4
	I0419 18:59:07.021124   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1743","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0419 18:59:07.021942   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:07.021999   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.021999   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.022058   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.033923   14960 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0419 18:59:07.033983   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.033983   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.033983   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.033983   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.033983   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.033983   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.033983   14960 round_trippers.go:580]     Audit-Id: 18eabf05-5209-425c-b1c1-8b00846a50c2
	I0419 18:59:07.034559   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:07.502757   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 18:59:07.502961   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.502961   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.503039   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.508018   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:07.508018   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.508018   14960 round_trippers.go:580]     Audit-Id: f5901ff3-df7c-45fa-9dec-750a43541171
	I0419 18:59:07.508018   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.508018   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.508018   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.508018   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.508018   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.508018   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1944","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6786 chars]
	I0419 18:59:07.509050   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:07.509129   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.509129   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.509129   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.511296   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:07.512289   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.512327   14960 round_trippers.go:580]     Audit-Id: 696e3ece-e5f2-482b-b3fa-b066333e9c70
	I0419 18:59:07.512327   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.512327   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.512327   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.512327   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.512327   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.512598   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:07.513028   14960 pod_ready.go:92] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"True"
	I0419 18:59:07.513028   14960 pod_ready.go:81] duration metric: took 27.0146735s for pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:07.513028   14960 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:07.513142   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-348000
	I0419 18:59:07.513142   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.513225   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.513225   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.518561   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:59:07.518657   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.518657   14960 round_trippers.go:580]     Audit-Id: 420728a6-e4d0-4d9a-a9bc-15f5b1b59d30
	I0419 18:59:07.518657   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.518657   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.518657   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.518657   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.518739   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.519500   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-348000","namespace":"kube-system","uid":"33702588-cdf3-4577-b18d-18415cca2c25","resourceVersion":"1836","creationTimestamp":"2024-04-20T01:58:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.42.24:2379","kubernetes.io/config.hash":"c0cfa3da6a3913c3e67500f6c3e9d72b","kubernetes.io/config.mirror":"c0cfa3da6a3913c3e67500f6c3e9d72b","kubernetes.io/config.seen":"2024-04-20T01:57:55.099346749Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:58:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6149 chars]
	I0419 18:59:07.519550   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:07.519550   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.519550   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.519550   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.522314   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:07.522314   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.522314   14960 round_trippers.go:580]     Audit-Id: 668d1d46-9b89-4c7a-a9be-d01ff8dd8d6d
	I0419 18:59:07.523331   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.523331   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.523331   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.523331   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.523331   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.523331   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:07.523331   14960 pod_ready.go:92] pod "etcd-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 18:59:07.524113   14960 pod_ready.go:81] duration metric: took 10.303ms for pod "etcd-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:07.524146   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:07.524146   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-348000
	I0419 18:59:07.524146   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.524146   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.524146   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.526631   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:07.526631   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.526631   14960 round_trippers.go:580]     Audit-Id: 8583e14e-6dea-4103-800e-098537e0117a
	I0419 18:59:07.526631   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.526631   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.526631   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.526631   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.526631   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.527729   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-348000","namespace":"kube-system","uid":"13adbf1b-6c17-47a9-951d-2481680a47bd","resourceVersion":"1823","creationTimestamp":"2024-04-20T01:58:01Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.42.24:8443","kubernetes.io/config.hash":"af7a3c9321ace7e2a933260472b90113","kubernetes.io/config.mirror":"af7a3c9321ace7e2a933260472b90113","kubernetes.io/config.seen":"2024-04-20T01:57:55.026086199Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:58:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7685 chars]
	I0419 18:59:07.528175   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:07.528175   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.528175   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.528175   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.530806   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:07.530806   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.530806   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.530806   14960 round_trippers.go:580]     Audit-Id: bb69a5d2-e9e3-4b6c-969a-63c6433f4821
	I0419 18:59:07.530806   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.530806   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.530806   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.530806   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.530806   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:07.530806   14960 pod_ready.go:92] pod "kube-apiserver-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 18:59:07.530806   14960 pod_ready.go:81] duration metric: took 6.6602ms for pod "kube-apiserver-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:07.530806   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:07.532201   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-348000
	I0419 18:59:07.532201   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.532201   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.532332   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.535080   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:07.535080   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.535080   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.536048   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.536048   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.536048   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.536048   14960 round_trippers.go:580]     Audit-Id: 38701ef6-d4e6-4688-8eab-6aaad79aa8e5
	I0419 18:59:07.536048   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.536419   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-348000","namespace":"kube-system","uid":"299bb088-9795-4452-87a8-5e96bcacedde","resourceVersion":"1829","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"30aa2729d0c65b9f89e1ae2d151edd9b","kubernetes.io/config.mirror":"30aa2729d0c65b9f89e1ae2d151edd9b","kubernetes.io/config.seen":"2024-04-20T01:35:08.321898260Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0419 18:59:07.537180   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:07.537180   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.537231   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.537231   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.539482   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 18:59:07.539482   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.539482   14960 round_trippers.go:580]     Audit-Id: 179cb76c-c5c9-4176-a360-e036f1c8f798
	I0419 18:59:07.539482   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.539482   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.539482   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.539482   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.539482   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.539482   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:07.539482   14960 pod_ready.go:92] pod "kube-controller-manager-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 18:59:07.539482   14960 pod_ready.go:81] duration metric: took 7.2809ms for pod "kube-controller-manager-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:07.539482   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2jjsq" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:07.539482   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2jjsq
	I0419 18:59:07.539482   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.540493   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.540535   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.542270   14960 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:59:07.542270   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.542270   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.542270   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.542270   14960 round_trippers.go:580]     Audit-Id: 9c19064f-4110-482a-9b33-bdb23bb21ff0
	I0419 18:59:07.542270   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.542270   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.543246   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.544226   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2jjsq","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9666ab7-0d1f-4800-b979-6e38fecdc518","resourceVersion":"1708","creationTimestamp":"2024-04-20T01:42:52Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:42:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0419 18:59:07.544899   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m03
	I0419 18:59:07.544978   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.544978   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.545059   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.546925   14960 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:59:07.546925   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.546925   14960 round_trippers.go:580]     Audit-Id: 2f6646c5-bdcd-4060-b3dc-3f276a83411d
	I0419 18:59:07.546925   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.546925   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.546925   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.547947   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.547947   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.548092   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m03","uid":"08bfca2d-b382-4052-a5b6-0a78bee7caef","resourceVersion":"1871","creationTimestamp":"2024-04-20T01:53:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_53_29_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:53:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4398 chars]
	I0419 18:59:07.548536   14960 pod_ready.go:97] node "multinode-348000-m03" hosting pod "kube-proxy-2jjsq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000-m03" has status "Ready":"Unknown"
	I0419 18:59:07.548536   14960 pod_ready.go:81] duration metric: took 9.0538ms for pod "kube-proxy-2jjsq" in "kube-system" namespace to be "Ready" ...
	E0419 18:59:07.548536   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000-m03" hosting pod "kube-proxy-2jjsq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000-m03" has status "Ready":"Unknown"
	I0419 18:59:07.548536   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bjv9b" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:07.705114   14960 request.go:629] Waited for 156.4717ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bjv9b
	I0419 18:59:07.705326   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bjv9b
	I0419 18:59:07.705391   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.705391   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.705430   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.709801   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:07.709801   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.709801   14960 round_trippers.go:580]     Audit-Id: f15fc53e-6021-4d4f-ba7b-a7acaae73a3a
	I0419 18:59:07.709801   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.709801   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.710149   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.710149   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.710149   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.710832   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bjv9b","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e909d14-543a-4734-8c17-7e2b8188553d","resourceVersion":"1918","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0419 18:59:07.908329   14960 request.go:629] Waited for 196.2638ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:59:07.908646   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 18:59:07.908646   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:07.908646   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:07.908646   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:07.913701   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:07.913789   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:07.913789   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:07.913789   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:07.913789   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:07.913877   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:07 GMT
	I0419 18:59:07.913877   14960 round_trippers.go:580]     Audit-Id: eec667c0-5f4b-4396-b538-1a02bb301448
	I0419 18:59:07.913877   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:07.913877   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608","resourceVersion":"1930","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_38_19_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4582 chars]
	I0419 18:59:07.914762   14960 pod_ready.go:97] node "multinode-348000-m02" hosting pod "kube-proxy-bjv9b" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000-m02" has status "Ready":"Unknown"
	I0419 18:59:07.914762   14960 pod_ready.go:81] duration metric: took 366.1192ms for pod "kube-proxy-bjv9b" in "kube-system" namespace to be "Ready" ...
	E0419 18:59:07.914762   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000-m02" hosting pod "kube-proxy-bjv9b" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000-m02" has status "Ready":"Unknown"
	I0419 18:59:07.914762   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kj76x" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:08.113272   14960 request.go:629] Waited for 198.1954ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kj76x
	I0419 18:59:08.113485   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kj76x
	I0419 18:59:08.113485   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:08.113485   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:08.113485   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:08.118071   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:08.118071   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:08.118071   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:08.118071   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:08.118071   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:08 GMT
	I0419 18:59:08.118071   14960 round_trippers.go:580]     Audit-Id: d640072f-850f-4e7a-b610-f17bcf62a58d
	I0419 18:59:08.118071   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:08.118071   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:08.118762   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kj76x","generateName":"kube-proxy-","namespace":"kube-system","uid":"274342c4-c21f-4279-b0ea-743d8e2c1463","resourceVersion":"1750","creationTimestamp":"2024-04-20T01:35:22Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0419 18:59:08.303557   14960 request.go:629] Waited for 184.3049ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:08.303669   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:08.303723   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:08.303723   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:08.303723   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:08.307071   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:08.307071   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:08.307071   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:08 GMT
	I0419 18:59:08.307071   14960 round_trippers.go:580]     Audit-Id: c3a8878b-de3c-448e-80a5-8f98e8f88f18
	I0419 18:59:08.307071   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:08.307071   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:08.307071   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:08.307071   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:08.307071   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:08.307071   14960 pod_ready.go:92] pod "kube-proxy-kj76x" in "kube-system" namespace has status "Ready":"True"
	I0419 18:59:08.307071   14960 pod_ready.go:81] duration metric: took 392.3086ms for pod "kube-proxy-kj76x" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:08.307071   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:08.509840   14960 request.go:629] Waited for 202.6854ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348000
	I0419 18:59:08.509840   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348000
	I0419 18:59:08.509840   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:08.509840   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:08.509840   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:08.515634   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:59:08.515634   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:08.515634   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:08.515634   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:08 GMT
	I0419 18:59:08.515634   14960 round_trippers.go:580]     Audit-Id: 0cedc28b-6be5-4d75-a299-e4297f58ea50
	I0419 18:59:08.515634   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:08.515634   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:08.515891   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:08.516129   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-348000","namespace":"kube-system","uid":"000cfafe-a513-4738-9de2-3c25244b72be","resourceVersion":"1824","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"92813b2aed63b63058d3fd06709fa24e","kubernetes.io/config.mirror":"92813b2aed63b63058d3fd06709fa24e","kubernetes.io/config.seen":"2024-04-20T01:35:08.321899460Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0419 18:59:08.712798   14960 request.go:629] Waited for 195.3539ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:08.712798   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 18:59:08.712798   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:08.712798   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:08.712798   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:08.716425   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 18:59:08.717222   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:08.717222   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:08.717222   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:08.717222   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:08.717222   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:08 GMT
	I0419 18:59:08.717222   14960 round_trippers.go:580]     Audit-Id: 052f1365-af7d-4a4c-87bb-d2c6961f5fb4
	I0419 18:59:08.717222   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:08.717222   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 18:59:08.718327   14960 pod_ready.go:92] pod "kube-scheduler-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 18:59:08.718327   14960 pod_ready.go:81] duration metric: took 411.2544ms for pod "kube-scheduler-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 18:59:08.718327   14960 pod_ready.go:38] duration metric: took 28.2332658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 18:59:08.718327   14960 api_server.go:52] waiting for apiserver process to appear ...
	I0419 18:59:08.729751   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 18:59:08.754094   14960 command_runner.go:130] > bd3aa93bac25
	I0419 18:59:08.754215   14960 logs.go:276] 1 containers: [bd3aa93bac25]
	I0419 18:59:08.764137   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 18:59:08.785717   14960 command_runner.go:130] > 2deabe4dbdf4
	I0419 18:59:08.785790   14960 logs.go:276] 1 containers: [2deabe4dbdf4]
	I0419 18:59:08.796593   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 18:59:08.827474   14960 command_runner.go:130] > 352cf21a3e20
	I0419 18:59:08.828457   14960 command_runner.go:130] > 627b84abf45c
	I0419 18:59:08.828457   14960 logs.go:276] 2 containers: [352cf21a3e20 627b84abf45c]
	I0419 18:59:08.838185   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 18:59:08.862005   14960 command_runner.go:130] > d57aee391c14
	I0419 18:59:08.862005   14960 command_runner.go:130] > e476774b8f77
	I0419 18:59:08.863002   14960 logs.go:276] 2 containers: [d57aee391c14 e476774b8f77]
	I0419 18:59:08.872905   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 18:59:08.893884   14960 command_runner.go:130] > e438af0f1ec9
	I0419 18:59:08.893884   14960 command_runner.go:130] > a6586791413d
	I0419 18:59:08.894266   14960 logs.go:276] 2 containers: [e438af0f1ec9 a6586791413d]
	I0419 18:59:08.904440   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 18:59:08.931190   14960 command_runner.go:130] > b67f2295d26c
	I0419 18:59:08.932028   14960 command_runner.go:130] > 9638ddcd5428
	I0419 18:59:08.932028   14960 logs.go:276] 2 containers: [b67f2295d26c 9638ddcd5428]
	I0419 18:59:08.943113   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 18:59:08.966177   14960 command_runner.go:130] > ae0b21715f86
	I0419 18:59:08.966177   14960 command_runner.go:130] > f8c798c99407
	I0419 18:59:08.966877   14960 logs.go:276] 2 containers: [ae0b21715f86 f8c798c99407]
	I0419 18:59:08.966877   14960 logs.go:123] Gathering logs for dmesg ...
	I0419 18:59:08.966877   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 18:59:08.996433   14960 command_runner.go:130] > [Apr20 01:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0419 18:59:08.996848   14960 command_runner.go:130] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0419 18:59:08.996848   14960 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0419 18:59:08.996848   14960 command_runner.go:130] > [  +0.134823] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0419 18:59:08.996965   14960 command_runner.go:130] > [  +0.023006] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0419 18:59:08.996965   14960 command_runner.go:130] > [  +0.000006] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0419 18:59:08.996965   14960 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0419 18:59:08.996965   14960 command_runner.go:130] > [  +0.065433] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0419 18:59:08.997080   14960 command_runner.go:130] > [  +0.022829] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0419 18:59:08.997080   14960 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0419 18:59:08.997080   14960 command_runner.go:130] > [  +5.461945] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0419 18:59:08.997142   14960 command_runner.go:130] > [  +0.733998] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0419 18:59:08.997142   14960 command_runner.go:130] > [  +1.817887] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0419 18:59:08.997142   14960 command_runner.go:130] > [  +7.031305] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0419 18:59:08.997142   14960 command_runner.go:130] > [  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0419 18:59:08.997142   14960 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0419 18:59:08.997142   14960 command_runner.go:130] > [Apr20 01:57] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	I0419 18:59:08.997142   14960 command_runner.go:130] > [  +0.209815] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [ +26.622359] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +0.115734] kauditd_printk_skb: 73 callbacks suppressed
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +0.605928] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +0.209234] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +0.243987] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +2.954231] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +0.209781] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +0.225214] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +0.313735] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +0.929646] systemd-fstab-generator[1383]: Ignoring "noauto" option for root device
	I0419 18:59:08.997239   14960 command_runner.go:130] > [  +0.108494] kauditd_printk_skb: 205 callbacks suppressed
	I0419 18:59:08.997400   14960 command_runner.go:130] > [  +3.650728] systemd-fstab-generator[1520]: Ignoring "noauto" option for root device
	I0419 18:59:08.997400   14960 command_runner.go:130] > [  +1.371725] kauditd_printk_skb: 49 callbacks suppressed
	I0419 18:59:08.997400   14960 command_runner.go:130] > [Apr20 01:58] kauditd_printk_skb: 25 callbacks suppressed
	I0419 18:59:08.997400   14960 command_runner.go:130] > [  +3.878920] systemd-fstab-generator[2324]: Ignoring "noauto" option for root device
	I0419 18:59:08.997400   14960 command_runner.go:130] > [  +7.552702] kauditd_printk_skb: 70 callbacks suppressed
	I0419 18:59:08.999638   14960 logs.go:123] Gathering logs for coredns [352cf21a3e20] ...
	I0419 18:59:08.999704   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 352cf21a3e20"
	I0419 18:59:09.041411   14960 command_runner.go:130] > .:53
	I0419 18:59:09.041570   14960 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93714cfd58e203ac2baa48ea9c7b435951d2a9faed7a5c70b4e84c89c6c1fe4c1dfa41f14b3ebf0f5941dade673a82eaad960061e673dd78dcb856db3393b39d
	I0419 18:59:09.041570   14960 command_runner.go:130] > CoreDNS-1.11.1
	I0419 18:59:09.041570   14960 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0419 18:59:09.041570   14960 command_runner.go:130] > [INFO] 127.0.0.1:51206 - 14298 "HINFO IN 4972057462503628469.2167329557243878603. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028297062s
	I0419 18:59:09.042033   14960 logs.go:123] Gathering logs for kube-scheduler [d57aee391c14] ...
	I0419 18:59:09.042033   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57aee391c14"
	I0419 18:59:09.074641   14960 command_runner.go:130] ! I0420 01:57:58.020728       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:09.074740   14960 command_runner.go:130] ! I0420 01:58:00.771749       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0419 18:59:09.074927   14960 command_runner.go:130] ! I0420 01:58:00.771906       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.075024   14960 command_runner.go:130] ! I0420 01:58:00.785599       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0419 18:59:09.075154   14960 command_runner.go:130] ! I0420 01:58:00.785824       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0419 18:59:09.075154   14960 command_runner.go:130] ! I0420 01:58:00.785929       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 18:59:09.075154   14960 command_runner.go:130] ! I0420 01:58:00.785956       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:09.075154   14960 command_runner.go:130] ! I0420 01:58:00.785972       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0419 18:59:09.075154   14960 command_runner.go:130] ! I0420 01:58:00.786046       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0419 18:59:09.075154   14960 command_runner.go:130] ! I0420 01:58:00.786323       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0419 18:59:09.075154   14960 command_runner.go:130] ! I0420 01:58:00.786915       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:09.075154   14960 command_runner.go:130] ! I0420 01:58:00.887091       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0419 18:59:09.076050   14960 command_runner.go:130] ! I0420 01:58:00.887476       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:09.076050   14960 command_runner.go:130] ! I0420 01:58:00.888293       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0419 18:59:09.079596   14960 logs.go:123] Gathering logs for kube-proxy [e438af0f1ec9] ...
	I0419 18:59:09.079629   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e438af0f1ec9"
	I0419 18:59:09.105026   14960 command_runner.go:130] ! I0420 01:58:03.129201       1 server_linux.go:69] "Using iptables proxy"
	I0419 18:59:09.105026   14960 command_runner.go:130] ! I0420 01:58:03.201631       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.42.24"]
	I0419 18:59:09.105026   14960 command_runner.go:130] ! I0420 01:58:03.344058       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 18:59:09.105026   14960 command_runner.go:130] ! I0420 01:58:03.344107       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 18:59:09.105026   14960 command_runner.go:130] ! I0420 01:58:03.344137       1 server_linux.go:165] "Using iptables Proxier"
	I0419 18:59:09.105881   14960 command_runner.go:130] ! I0420 01:58:03.353394       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 18:59:09.105924   14960 command_runner.go:130] ! I0420 01:58:03.354462       1 server.go:872] "Version info" version="v1.30.0"
	I0419 18:59:09.105924   14960 command_runner.go:130] ! I0420 01:58:03.354693       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.105924   14960 command_runner.go:130] ! I0420 01:58:03.358325       1 config.go:192] "Starting service config controller"
	I0419 18:59:09.105924   14960 command_runner.go:130] ! I0420 01:58:03.358366       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 18:59:09.105924   14960 command_runner.go:130] ! I0420 01:58:03.358985       1 config.go:101] "Starting endpoint slice config controller"
	I0419 18:59:09.105992   14960 command_runner.go:130] ! I0420 01:58:03.359176       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 18:59:09.105992   14960 command_runner.go:130] ! I0420 01:58:03.358997       1 config.go:319] "Starting node config controller"
	I0419 18:59:09.106046   14960 command_runner.go:130] ! I0420 01:58:03.368409       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 18:59:09.106085   14960 command_runner.go:130] ! I0420 01:58:03.459372       1 shared_informer.go:320] Caches are synced for service config
	I0419 18:59:09.106085   14960 command_runner.go:130] ! I0420 01:58:03.459745       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 18:59:09.106085   14960 command_runner.go:130] ! I0420 01:58:03.470538       1 shared_informer.go:320] Caches are synced for node config
	I0419 18:59:09.108043   14960 logs.go:123] Gathering logs for kube-controller-manager [b67f2295d26c] ...
	I0419 18:59:09.108043   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67f2295d26c"
	I0419 18:59:09.137861   14960 command_runner.go:130] ! I0420 01:57:58.124915       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:09.138926   14960 command_runner.go:130] ! I0420 01:57:58.572589       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0419 18:59:09.139991   14960 command_runner.go:130] ! I0420 01:57:58.572759       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.140029   14960 command_runner.go:130] ! I0420 01:57:58.576545       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:09.140081   14960 command_runner.go:130] ! I0420 01:57:58.576765       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:09.140081   14960 command_runner.go:130] ! I0420 01:57:58.577138       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0419 18:59:09.140081   14960 command_runner.go:130] ! I0420 01:57:58.577308       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:09.140081   14960 command_runner.go:130] ! I0420 01:58:02.671844       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0419 18:59:09.140138   14960 command_runner.go:130] ! I0420 01:58:02.672396       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0419 18:59:09.140138   14960 command_runner.go:130] ! I0420 01:58:02.683222       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0419 18:59:09.140202   14960 command_runner.go:130] ! I0420 01:58:02.683502       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0419 18:59:09.140202   14960 command_runner.go:130] ! I0420 01:58:02.683748       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0419 18:59:09.140202   14960 command_runner.go:130] ! I0420 01:58:02.684992       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0419 18:59:09.140202   14960 command_runner.go:130] ! I0420 01:58:02.685159       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0419 18:59:09.140268   14960 command_runner.go:130] ! I0420 01:58:02.689572       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0419 18:59:09.140268   14960 command_runner.go:130] ! I0420 01:58:02.693653       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0419 18:59:09.140268   14960 command_runner.go:130] ! I0420 01:58:02.694118       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0419 18:59:09.140336   14960 command_runner.go:130] ! I0420 01:58:02.694295       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0419 18:59:09.140336   14960 command_runner.go:130] ! I0420 01:58:02.695565       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0419 18:59:09.140336   14960 command_runner.go:130] ! I0420 01:58:02.695757       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0419 18:59:09.140426   14960 command_runner.go:130] ! I0420 01:58:02.700089       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0419 18:59:09.140426   14960 command_runner.go:130] ! I0420 01:58:02.700328       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0419 18:59:09.140461   14960 command_runner.go:130] ! I0420 01:58:02.700370       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0419 18:59:09.140461   14960 command_runner.go:130] ! I0420 01:58:02.708704       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0419 18:59:09.140499   14960 command_runner.go:130] ! I0420 01:58:02.712057       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.712325       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.712551       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0419 18:59:09.140531   14960 command_runner.go:130] ! E0420 01:58:02.728628       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.728672       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! E0420 01:58:02.742147       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.742194       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.742206       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.748098       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.748399       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.748420       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.752218       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.752332       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.752344       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.765569       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.765610       1 shared_informer.go:313] Waiting for caches to sync for job
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.765645       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.772658       1 shared_informer.go:320] Caches are synced for tokens
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.773270       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.773287       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.786700       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.788042       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.799412       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.804126       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.804238       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0419 18:59:09.140531   14960 command_runner.go:130] ! I0420 01:58:02.814226       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.818062       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.818127       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.868296       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.868361       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.868379       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.870217       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873404       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873440       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! W0420 01:58:02.873461       1 shared_informer.go:597] resyncPeriod 18h17m32.022460908s is smaller than resyncCheckPeriod 19h9m29.930546571s and the informer has already started. Changing it to 19h9m29.930546571s
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873587       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873612       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873690       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873722       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873768       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873784       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873852       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873883       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873963       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.873989       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.874019       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.874045       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.874084       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.874104       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.874180       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.874255       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.874269       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.874289       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.910217       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.910746       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.912220       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.928174       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.928508       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.928473       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.929874       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.931641       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.931894       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.932890       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.934333       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.934546       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.934881       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.939106       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:02.939460       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.968845       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.968916       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.969733       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.969944       1 shared_informer.go:313] Waiting for caches to sync for node
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.975888       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.977148       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.977216       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.978712       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.979007       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.979040       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.982094       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.982639       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:12.982957       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.032307       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.032749       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.035306       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.036848       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.037653       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.038965       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.039366       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.039352       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.040679       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.040782       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.040908       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.041738       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.041781       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.042295       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.041839       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.042314       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0419 18:59:09.141222   14960 command_runner.go:130] ! I0420 01:58:13.041850       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:09.143989   14960 command_runner.go:130] ! I0420 01:58:13.042715       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:09.143989   14960 command_runner.go:130] ! I0420 01:58:13.046953       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0419 18:59:09.143989   14960 command_runner.go:130] ! I0420 01:58:13.047617       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0419 18:59:09.143989   14960 command_runner.go:130] ! I0420 01:58:13.047660       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0419 18:59:09.144101   14960 command_runner.go:130] ! I0420 01:58:13.047670       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0419 18:59:09.144101   14960 command_runner.go:130] ! I0420 01:58:13.050144       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0419 18:59:09.144101   14960 command_runner.go:130] ! I0420 01:58:13.050286       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0419 18:59:09.144101   14960 command_runner.go:130] ! I0420 01:58:13.050982       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0419 18:59:09.144173   14960 command_runner.go:130] ! I0420 01:58:13.051033       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0419 18:59:09.144173   14960 command_runner.go:130] ! I0420 01:58:13.051061       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0419 18:59:09.144230   14960 command_runner.go:130] ! I0420 01:58:13.054294       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0419 18:59:09.144273   14960 command_runner.go:130] ! I0420 01:58:13.054709       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0419 18:59:09.144273   14960 command_runner.go:130] ! I0420 01:58:13.054987       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0419 18:59:09.144273   14960 command_runner.go:130] ! I0420 01:58:13.057961       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0419 18:59:09.144273   14960 command_runner.go:130] ! I0420 01:58:13.058399       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0419 18:59:09.144338   14960 command_runner.go:130] ! I0420 01:58:13.058606       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0419 18:59:09.144338   14960 command_runner.go:130] ! I0420 01:58:13.060766       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.061307       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.060852       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.061691       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.064061       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.064698       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.065134       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.067945       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.068315       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.068613       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.077312       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.077939       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.078050       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.078623       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.083275       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.083591       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.083702       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.090751       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.091149       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.091393       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.091591       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.096868       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.097085       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.100720       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.101287       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.101375       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.103459       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.106949       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.107026       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.116002       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.139685       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.148344       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.152489       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.140934       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.151083       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000\" does not exist"
	I0419 18:59:09.144386   14960 command_runner.go:130] ! I0420 01:58:13.141105       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0419 18:59:09.144971   14960 command_runner.go:130] ! I0420 01:58:13.156086       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:09.144971   14960 command_runner.go:130] ! I0420 01:58:13.156676       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m02\" does not exist"
	I0419 18:59:09.145018   14960 command_runner.go:130] ! I0420 01:58:13.156750       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0419 18:59:09.145018   14960 command_runner.go:130] ! I0420 01:58:13.156865       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.145018   14960 command_runner.go:130] ! I0420 01:58:13.142425       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0419 18:59:09.145111   14960 command_runner.go:130] ! I0420 01:58:13.157020       1 shared_informer.go:320] Caches are synced for expand
	I0419 18:59:09.145111   14960 command_runner.go:130] ! I0420 01:58:13.159992       1 shared_informer.go:320] Caches are synced for ephemeral
	I0419 18:59:09.145144   14960 command_runner.go:130] ! I0420 01:58:13.145957       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:09.145191   14960 command_runner.go:130] ! I0420 01:58:13.162320       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.165325       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.165759       1 shared_informer.go:320] Caches are synced for job
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.169537       1 shared_informer.go:320] Caches are synced for service account
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.171293       1 shared_informer.go:320] Caches are synced for node
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.178178       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.178222       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.178230       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.178237       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.178270       1 shared_informer.go:320] Caches are synced for attach detach
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.179699       1 shared_informer.go:320] Caches are synced for PV protection
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.183856       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.183905       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.188521       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.195859       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.200417       1 shared_informer.go:320] Caches are synced for crt configmap
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.201881       1 shared_informer.go:320] Caches are synced for persistent volume
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.204647       1 shared_informer.go:320] Caches are synced for endpoint
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.207356       1 shared_informer.go:320] Caches are synced for PVC protection
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.213532       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.214173       1 shared_informer.go:320] Caches are synced for namespace
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.219105       1 shared_informer.go:320] Caches are synced for GC
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.228919       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.535929ms"
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.230155       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.901µs"
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.230170       1 shared_informer.go:320] Caches are synced for HPA
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.234086       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.236046       1 shared_informer.go:320] Caches are synced for TTL
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.240266       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.682408ms"
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.240992       1 shared_informer.go:320] Caches are synced for deployment
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.243741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="114.104µs"
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.248776       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.252859       1 shared_informer.go:320] Caches are synced for daemon sets
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.253008       1 shared_informer.go:320] Caches are synced for taint
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.259997       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.297486       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000"
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.297542       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m02"
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.297627       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m03"
	I0419 18:59:09.145220   14960 command_runner.go:130] ! I0420 01:58:13.297865       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0419 18:59:09.145801   14960 command_runner.go:130] ! I0420 01:58:13.335459       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0419 18:59:09.145801   14960 command_runner.go:130] ! I0420 01:58:13.374436       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:09.145801   14960 command_runner.go:130] ! I0420 01:58:13.389294       1 shared_informer.go:320] Caches are synced for cronjob
	I0419 18:59:09.145848   14960 command_runner.go:130] ! I0420 01:58:13.392315       1 shared_informer.go:320] Caches are synced for disruption
	I0419 18:59:09.145848   14960 command_runner.go:130] ! I0420 01:58:13.397172       1 shared_informer.go:320] Caches are synced for stateful set
	I0419 18:59:09.145848   14960 command_runner.go:130] ! I0420 01:58:13.416186       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:09.145848   14960 command_runner.go:130] ! I0420 01:58:13.857437       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:09.145848   14960 command_runner.go:130] ! I0420 01:58:13.878325       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:09.145848   14960 command_runner.go:130] ! I0420 01:58:13.878534       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0419 18:59:09.145848   14960 command_runner.go:130] ! I0420 01:58:40.290168       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.145971   14960 command_runner.go:130] ! I0420 01:58:53.395955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.694507ms"
	I0419 18:59:09.145971   14960 command_runner.go:130] ! I0420 01:58:53.396146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.003µs"
	I0419 18:59:09.146005   14960 command_runner.go:130] ! I0420 01:59:07.033370       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.713655ms"
	I0419 18:59:09.146005   14960 command_runner.go:130] ! I0420 01:59:07.033533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.092µs"
	I0419 18:59:09.146058   14960 command_runner.go:130] ! I0420 01:59:07.047220       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.391µs"
	I0419 18:59:09.146058   14960 command_runner.go:130] ! I0420 01:59:07.121391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.338984ms"
	I0419 18:59:09.146100   14960 command_runner.go:130] ! I0420 01:59:07.121503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.691µs"
	I0419 18:59:09.162742   14960 logs.go:123] Gathering logs for container status ...
	I0419 18:59:09.162742   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 18:59:09.238662   14960 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0419 18:59:09.238662   14960 command_runner.go:130] > d608b74b0597f       8c811b4aec35f                                                                                         4 seconds ago        Running             busybox                   1                   75ff9f4e9dde2       busybox-fc5497c4f-xnz2k
	I0419 18:59:09.238662   14960 command_runner.go:130] > 352cf21a3e202       cbb01a7bd410d                                                                                         4 seconds ago        Running             coredns                   1                   f28a1e746a9b4       coredns-7db6d8ff4d-7w477
	I0419 18:59:09.238662   14960 command_runner.go:130] > c6f350bee7762       6e38f40d628db                                                                                         24 seconds ago       Running             storage-provisioner       2                   5472c1fba3929       storage-provisioner
	I0419 18:59:09.238662   14960 command_runner.go:130] > ae0b21715f861       4950bb10b3f87                                                                                         33 seconds ago       Running             kindnet-cni               2                   b5a777eba295e       kindnet-s4fsr
	I0419 18:59:09.238662   14960 command_runner.go:130] > f8c798c994078       4950bb10b3f87                                                                                         About a minute ago   Exited              kindnet-cni               1                   b5a777eba295e       kindnet-s4fsr
	I0419 18:59:09.238662   14960 command_runner.go:130] > 45383c4290ad1       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   5472c1fba3929       storage-provisioner
	I0419 18:59:09.238662   14960 command_runner.go:130] > e438af0f1ec9e       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   09f65a6953038       kube-proxy-kj76x
	I0419 18:59:09.238662   14960 command_runner.go:130] > 2deabe4dbdf41       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   ab9ff1d906880       etcd-multinode-348000
	I0419 18:59:09.238662   14960 command_runner.go:130] > bd3aa93bac25b       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   d7052a6f04def       kube-apiserver-multinode-348000
	I0419 18:59:09.238662   14960 command_runner.go:130] > b67f2295d26ca       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   118cca57d1f54       kube-controller-manager-multinode-348000
	I0419 18:59:09.239687   14960 command_runner.go:130] > d57aee391c146       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   e8baa597c1467       kube-scheduler-multinode-348000
	I0419 18:59:09.239687   14960 command_runner.go:130] > d8afb3e1fb946       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   476e3efb38684       busybox-fc5497c4f-xnz2k
	I0419 18:59:09.239740   14960 command_runner.go:130] > 627b84abf45cd       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   2dd294415aae1       coredns-7db6d8ff4d-7w477
	I0419 18:59:09.239740   14960 command_runner.go:130] > a6586791413d0       a0bf559e280cf                                                                                         23 minutes ago       Exited              kube-proxy                0                   7935893e9f22a       kube-proxy-kj76x
	I0419 18:59:09.239807   14960 command_runner.go:130] > 9638ddcd54285       c7aad43836fa5                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   6e420625b84be       kube-controller-manager-multinode-348000
	I0419 18:59:09.239871   14960 command_runner.go:130] > e476774b8f77e       259c8277fcbbc                                                                                         24 minutes ago       Exited              kube-scheduler            0                   e5d733991bf1a       kube-scheduler-multinode-348000
	I0419 18:59:09.242765   14960 logs.go:123] Gathering logs for coredns [627b84abf45c] ...
	I0419 18:59:09.242815   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627b84abf45c"
	I0419 18:59:09.283621   14960 command_runner.go:130] > .:53
	I0419 18:59:09.283621   14960 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93714cfd58e203ac2baa48ea9c7b435951d2a9faed7a5c70b4e84c89c6c1fe4c1dfa41f14b3ebf0f5941dade673a82eaad960061e673dd78dcb856db3393b39d
	I0419 18:59:09.283621   14960 command_runner.go:130] > CoreDNS-1.11.1
	I0419 18:59:09.283621   14960 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0419 18:59:09.283800   14960 command_runner.go:130] > [INFO] 127.0.0.1:37904 - 37003 "HINFO IN 1336380353163369387.5260466772500757990. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.053891439s
	I0419 18:59:09.283800   14960 command_runner.go:130] > [INFO] 10.244.1.2:47846 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002913s
	I0419 18:59:09.283800   14960 command_runner.go:130] > [INFO] 10.244.1.2:60728 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.118385602s
	I0419 18:59:09.283800   14960 command_runner.go:130] > [INFO] 10.244.1.2:48827 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.043741711s
	I0419 18:59:09.283800   14960 command_runner.go:130] > [INFO] 10.244.1.2:57126 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.111854404s
	I0419 18:59:09.283893   14960 command_runner.go:130] > [INFO] 10.244.0.3:44468 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001971s
	I0419 18:59:09.283893   14960 command_runner.go:130] > [INFO] 10.244.0.3:58477 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.002287005s
	I0419 18:59:09.283893   14960 command_runner.go:130] > [INFO] 10.244.0.3:39825 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000198301s
	I0419 18:59:09.283893   14960 command_runner.go:130] > [INFO] 10.244.0.3:54956 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000604s
	I0419 18:59:09.283893   14960 command_runner.go:130] > [INFO] 10.244.1.2:48593 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001261s
	I0419 18:59:09.283893   14960 command_runner.go:130] > [INFO] 10.244.1.2:58743 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.027871268s
	I0419 18:59:09.283979   14960 command_runner.go:130] > [INFO] 10.244.1.2:44517 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002274s
	I0419 18:59:09.283979   14960 command_runner.go:130] > [INFO] 10.244.1.2:35998 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000219501s
	I0419 18:59:09.283979   14960 command_runner.go:130] > [INFO] 10.244.1.2:58770 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012982932s
	I0419 18:59:09.283979   14960 command_runner.go:130] > [INFO] 10.244.1.2:55456 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174201s
	I0419 18:59:09.284062   14960 command_runner.go:130] > [INFO] 10.244.1.2:59031 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001304s
	I0419 18:59:09.284062   14960 command_runner.go:130] > [INFO] 10.244.1.2:41687 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000198401s
	I0419 18:59:09.284062   14960 command_runner.go:130] > [INFO] 10.244.0.3:46929 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003044s
	I0419 18:59:09.284062   14960 command_runner.go:130] > [INFO] 10.244.0.3:35877 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000325701s
	I0419 18:59:09.284138   14960 command_runner.go:130] > [INFO] 10.244.0.3:53705 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000318601s
	I0419 18:59:09.284138   14960 command_runner.go:130] > [INFO] 10.244.0.3:40560 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164401s
	I0419 18:59:09.284138   14960 command_runner.go:130] > [INFO] 10.244.0.3:53239 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001239s
	I0419 18:59:09.284138   14960 command_runner.go:130] > [INFO] 10.244.0.3:39754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001464s
	I0419 18:59:09.284138   14960 command_runner.go:130] > [INFO] 10.244.0.3:41397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001668s
	I0419 18:59:09.284255   14960 command_runner.go:130] > [INFO] 10.244.0.3:49126 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001646s
	I0419 18:59:09.284255   14960 command_runner.go:130] > [INFO] 10.244.1.2:37850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115501s
	I0419 18:59:09.284255   14960 command_runner.go:130] > [INFO] 10.244.1.2:44063 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001443s
	I0419 18:59:09.284255   14960 command_runner.go:130] > [INFO] 10.244.1.2:39924 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000607s
	I0419 18:59:09.284255   14960 command_runner.go:130] > [INFO] 10.244.1.2:53244 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000622s
	I0419 18:59:09.284331   14960 command_runner.go:130] > [INFO] 10.244.0.3:52017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001879s
	I0419 18:59:09.284331   14960 command_runner.go:130] > [INFO] 10.244.0.3:55488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000814s
	I0419 18:59:09.284331   14960 command_runner.go:130] > [INFO] 10.244.0.3:57536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000778s
	I0419 18:59:09.284405   14960 command_runner.go:130] > [INFO] 10.244.0.3:45454 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001788s
	I0419 18:59:09.284405   14960 command_runner.go:130] > [INFO] 10.244.1.2:52247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001095s
	I0419 18:59:09.284405   14960 command_runner.go:130] > [INFO] 10.244.1.2:46954 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001143s
	I0419 18:59:09.284405   14960 command_runner.go:130] > [INFO] 10.244.1.2:47574 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098701s
	I0419 18:59:09.284477   14960 command_runner.go:130] > [INFO] 10.244.1.2:36658 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000170301s
	I0419 18:59:09.284477   14960 command_runner.go:130] > [INFO] 10.244.0.3:35421 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001002s
	I0419 18:59:09.284477   14960 command_runner.go:130] > [INFO] 10.244.0.3:41995 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132201s
	I0419 18:59:09.284477   14960 command_runner.go:130] > [INFO] 10.244.0.3:36431 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001956s
	I0419 18:59:09.284477   14960 command_runner.go:130] > [INFO] 10.244.0.3:38168 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000222s
	I0419 18:59:09.284549   14960 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0419 18:59:09.284549   14960 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0419 18:59:09.287225   14960 logs.go:123] Gathering logs for kube-proxy [a6586791413d] ...
	I0419 18:59:09.287225   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6586791413d"
	I0419 18:59:09.315630   14960 command_runner.go:130] ! I0420 01:35:26.120497       1 server_linux.go:69] "Using iptables proxy"
	I0419 18:59:09.315630   14960 command_runner.go:130] ! I0420 01:35:26.156956       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.42.231"]
	I0419 18:59:09.315630   14960 command_runner.go:130] ! I0420 01:35:26.208282       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 18:59:09.316431   14960 command_runner.go:130] ! I0420 01:35:26.208472       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 18:59:09.316431   14960 command_runner.go:130] ! I0420 01:35:26.208501       1 server_linux.go:165] "Using iptables Proxier"
	I0419 18:59:09.316461   14960 command_runner.go:130] ! I0420 01:35:26.214693       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 18:59:09.316461   14960 command_runner.go:130] ! I0420 01:35:26.216114       1 server.go:872] "Version info" version="v1.30.0"
	I0419 18:59:09.316461   14960 command_runner.go:130] ! I0420 01:35:26.216181       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.316461   14960 command_runner.go:130] ! I0420 01:35:26.219192       1 config.go:192] "Starting service config controller"
	I0419 18:59:09.316461   14960 command_runner.go:130] ! I0420 01:35:26.219810       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 18:59:09.316461   14960 command_runner.go:130] ! I0420 01:35:26.220079       1 config.go:101] "Starting endpoint slice config controller"
	I0419 18:59:09.316461   14960 command_runner.go:130] ! I0420 01:35:26.220093       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 18:59:09.316582   14960 command_runner.go:130] ! I0420 01:35:26.221802       1 config.go:319] "Starting node config controller"
	I0419 18:59:09.316582   14960 command_runner.go:130] ! I0420 01:35:26.221980       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 18:59:09.316644   14960 command_runner.go:130] ! I0420 01:35:26.320313       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 18:59:09.316644   14960 command_runner.go:130] ! I0420 01:35:26.320380       1 shared_informer.go:320] Caches are synced for service config
	I0419 18:59:09.316644   14960 command_runner.go:130] ! I0420 01:35:26.322323       1 shared_informer.go:320] Caches are synced for node config
	I0419 18:59:09.319064   14960 logs.go:123] Gathering logs for kubelet ...
	I0419 18:59:09.319150   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 18:59:09.351267   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0419 18:59:09.351340   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: I0420 01:57:51.575772    1390 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0419 18:59:09.351340   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: I0420 01:57:51.576306    1390 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.351395   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: I0420 01:57:51.577194    1390 server.go:927] "Client rotation is on, will bootstrap in background"
	I0419 18:59:09.351430   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: E0420 01:57:51.579651    1390 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0419 18:59:09.351474   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:09.351515   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0419 18:59:09.351515   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0419 18:59:09.351563   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0419 18:59:09.351563   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0419 18:59:09.351563   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: I0420 01:57:52.300689    1443 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0419 18:59:09.351602   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: I0420 01:57:52.301056    1443 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.351649   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: I0420 01:57:52.301551    1443 server.go:927] "Client rotation is on, will bootstrap in background"
	I0419 18:59:09.351689   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: E0420 01:57:52.301845    1443 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0419 18:59:09.351689   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:09.351740   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0419 18:59:09.351740   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0419 18:59:09.351798   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0419 18:59:09.351798   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.955182    1526 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0419 18:59:09.351839   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.955367    1526 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.351839   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.955676    1526 server.go:927] "Client rotation is on, will bootstrap in background"
	I0419 18:59:09.351884   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.957661    1526 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0419 18:59:09.351884   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.971626    1526 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:09.351939   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.998144    1526 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0419 18:59:09.351976   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.998312    1526 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0419 18:59:09.351976   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.999775    1526 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0419 18:59:09.352115   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:54.999948    1526 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-348000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0419 18:59:09.352150   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.000770    1526 topology_manager.go:138] "Creating topology manager with none policy"
	I0419 18:59:09.352150   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.000879    1526 container_manager_linux.go:301] "Creating device plugin manager"
	I0419 18:59:09.352197   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.001855    1526 state_mem.go:36] "Initialized new in-memory state store"
	I0419 18:59:09.352197   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.003861    1526 kubelet.go:400] "Attempting to sync node with API server"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.003952    1526 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.004045    1526 kubelet.go:312] "Adding apiserver pod source"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.009472    1526 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.017989    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.018091    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.019381    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.019428    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.019619    1526 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.1" apiVersion="v1"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.022328    1526 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.023051    1526 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.025680    1526 server.go:1264] "Started kubelet"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.028955    1526 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.031361    1526 server.go:455] "Adding debug handlers to kubelet server"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.034499    1526 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.035670    1526 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.036524    1526 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.19.42.24:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-348000.17c7da5cb9bb1787  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-348000,UID:multinode-348000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-348000,},FirstTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,LastTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-348000,}"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.053292    1526 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.062175    1526 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.067879    1526 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.097159    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="200ms"
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.116285    1526 factory.go:221] Registration of the systemd container factory successfully
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.117073    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.352234   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.118285    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.352809   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.117970    1526 reconciler.go:26] "Reconciler: start to sync state"
	I0419 18:59:09.352809   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.118962    1526 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0419 18:59:09.352856   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.119576    1526 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0419 18:59:09.352856   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.135081    1526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0419 18:59:09.352912   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.165861    1526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0419 18:59:09.352944   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166700    1526 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166759    1526 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166846    1526 state_mem.go:36] "Initialized new in-memory state store"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166997    1526 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168395    1526 kubelet.go:2337] "Starting kubelet main sync loop"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.168500    1526 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168338    1526 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168585    1526 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168613    1526 policy_none.go:49] "None policy: Start"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.167637    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.171087    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.172453    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.172557    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.187830    1526 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.187946    1526 state_mem.go:35] "Initializing new in-memory state store"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.189368    1526 state_mem.go:75] "Updated machine memory state"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.195268    1526 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.195483    1526 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.197626    1526 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.198638    1526 iptables.go:577] "Could not set up iptables canary" err=<
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.201551    1526 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-348000\" not found"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.269451    1526 topology_manager.go:215] "Topology Admit Handler" podUID="30aa2729d0c65b9f89e1ae2d151edd9b" podNamespace="kube-system" podName="kube-controller-manager-multinode-348000"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.271913    1526 topology_manager.go:215] "Topology Admit Handler" podUID="92813b2aed63b63058d3fd06709fa24e" podNamespace="kube-system" podName="kube-scheduler-multinode-348000"
	I0419 18:59:09.352963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.273779    1526 topology_manager.go:215] "Topology Admit Handler" podUID="af7a3c9321ace7e2a933260472b90113" podNamespace="kube-system" podName="kube-apiserver-multinode-348000"
	I0419 18:59:09.353645   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.275662    1526 topology_manager.go:215] "Topology Admit Handler" podUID="c0cfa3da6a3913c3e67500f6c3e9d72b" podNamespace="kube-system" podName="etcd-multinode-348000"
	I0419 18:59:09.353645   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.281258    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="476e3efb38684054cbc21c027cf1ddd3f9ca47bb829786f8636fd877fd4b2f81"
	I0419 18:59:09.353645   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.281433    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dd294415aae178d6b9bed0368d49bedc6d0afa8f5b9ad0011c73ffcb2c24b3c"
	I0419 18:59:09.353645   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.281454    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5d733991bf1a9e82ffd10768e0652c6c3f983ab24307142345cab3358f068bc"
	I0419 18:59:09.353645   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.297657    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd9e5fae3950c99e6cc71d6166919d407b00212c93827d74e5b83f3896925c0a"
	I0419 18:59:09.353843   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.310354    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="400ms"
	I0419 18:59:09.353843   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.316552    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="187cb57784f4ebcba88e5bf725c118a7d2beec4f543d3864e8f389573f0b11f9"
	I0419 18:59:09.353843   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.332421    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e420625b84be10aa87409a43f4296165b33ed76e82c3ba8a9214abd7177bd38"
	I0419 18:59:09.354005   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.356050    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00d48e11227effb5f0316d58c24e374b4b3f9dcd1b98ac51d6b0038a72d47e42"
	I0419 18:59:09.354005   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.372330    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:09.354005   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.373779    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:09.354088   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.376042    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da1d06ec238f43c7ad43cae75e142a6d15b9c8fb69f88ad8079f167f3f3a6fd4"
	I0419 18:59:09.354088   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.392858    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7935893e9f22a54393d2b3d0a644f7c11a848d5604938074232342a8602e239f"
	I0419 18:59:09.354088   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423082    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-ca-certs\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:09.354173   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423312    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-flexvolume-dir\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423400    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-k8s-certs\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423427    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-kubeconfig\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423456    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af7a3c9321ace7e2a933260472b90113-ca-certs\") pod \"kube-apiserver-multinode-348000\" (UID: \"af7a3c9321ace7e2a933260472b90113\") " pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423489    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/c0cfa3da6a3913c3e67500f6c3e9d72b-etcd-data\") pod \"etcd-multinode-348000\" (UID: \"c0cfa3da6a3913c3e67500f6c3e9d72b\") " pod="kube-system/etcd-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423525    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423552    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92813b2aed63b63058d3fd06709fa24e-kubeconfig\") pod \"kube-scheduler-multinode-348000\" (UID: \"92813b2aed63b63058d3fd06709fa24e\") " pod="kube-system/kube-scheduler-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423669    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af7a3c9321ace7e2a933260472b90113-k8s-certs\") pod \"kube-apiserver-multinode-348000\" (UID: \"af7a3c9321ace7e2a933260472b90113\") " pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423703    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af7a3c9321ace7e2a933260472b90113-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-348000\" (UID: \"af7a3c9321ace7e2a933260472b90113\") " pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423739    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/c0cfa3da6a3913c3e67500f6c3e9d72b-etcd-certs\") pod \"etcd-multinode-348000\" (UID: \"c0cfa3da6a3913c3e67500f6c3e9d72b\") " pod="kube-system/etcd-multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.518144    1526 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.19.42.24:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-348000.17c7da5cb9bb1787  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-348000,UID:multinode-348000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-348000,},FirstTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,LastTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-348000,}"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.713067    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="800ms"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.777032    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.778597    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:09.354256   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.832721    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.354831   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.832971    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.354831   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: W0420 01:57:56.061439    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.354831   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.063005    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.354915   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: W0420 01:57:56.073517    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.354915   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.073647    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.354989   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: W0420 01:57:56.303763    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.355060   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.303918    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:09.355060   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.515345    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="1.6s"
	I0419 18:59:09.355060   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: I0420 01:57:56.583532    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:09.355166   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.584646    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:09.355166   14960 command_runner.go:130] > Apr 20 01:57:58 multinode-348000 kubelet[1526]: I0420 01:57:58.185924    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:09.355166   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.850138    1526 kubelet_node_status.go:112] "Node was previously registered" node="multinode-348000"
	I0419 18:59:09.355241   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.850459    1526 kubelet_node_status.go:76] "Successfully registered node" node="multinode-348000"
	I0419 18:59:09.355241   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.852895    1526 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0419 18:59:09.355241   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.854574    1526 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0419 18:59:09.355340   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.855598    1526 setters.go:580] "Node became not ready" node="multinode-348000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-04-20T01:58:00Z","lastTransitionTime":"2024-04-20T01:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0419 18:59:09.355340   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.022496    1526 apiserver.go:52] "Watching apiserver"
	I0419 18:59:09.355340   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.028549    1526 topology_manager.go:215] "Topology Admit Handler" podUID="274342c4-c21f-4279-b0ea-743d8e2c1463" podNamespace="kube-system" podName="kube-proxy-kj76x"
	I0419 18:59:09.355413   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.028950    1526 topology_manager.go:215] "Topology Admit Handler" podUID="46c91d5e-edfa-4254-a802-148047caeab5" podNamespace="kube-system" podName="kindnet-s4fsr"
	I0419 18:59:09.355413   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.029150    1526 topology_manager.go:215] "Topology Admit Handler" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7w477"
	I0419 18:59:09.355413   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.029359    1526 topology_manager.go:215] "Topology Admit Handler" podUID="ffa0cfb9-91fb-4d5b-abe7-11992c731b74" podNamespace="kube-system" podName="storage-provisioner"
	I0419 18:59:09.355590   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.029596    1526 topology_manager.go:215] "Topology Admit Handler" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916" podNamespace="default" podName="busybox-fc5497c4f-xnz2k"
	I0419 18:59:09.355590   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.030004    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.355699   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.030339    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-348000" podUID="af4afa87-c484-4b73-9a4d-e86ddcd90049"
	I0419 18:59:09.355699   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.031127    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-348000" podUID="18f5e677-6a96-47ee-9f61-60ab9445eb92"
	I0419 18:59:09.355767   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.036486    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.355767   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.078433    1526 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-348000"
	I0419 18:59:09.355767   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.080072    1526 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0419 18:59:09.355836   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.080948    1526 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:09.355836   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.155980    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/274342c4-c21f-4279-b0ea-743d8e2c1463-xtables-lock\") pod \"kube-proxy-kj76x\" (UID: \"274342c4-c21f-4279-b0ea-743d8e2c1463\") " pod="kube-system/kube-proxy-kj76x"
	I0419 18:59:09.355906   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.156217    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/274342c4-c21f-4279-b0ea-743d8e2c1463-lib-modules\") pod \"kube-proxy-kj76x\" (UID: \"274342c4-c21f-4279-b0ea-743d8e2c1463\") " pod="kube-system/kube-proxy-kj76x"
	I0419 18:59:09.355993   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157104    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/46c91d5e-edfa-4254-a802-148047caeab5-cni-cfg\") pod \"kindnet-s4fsr\" (UID: \"46c91d5e-edfa-4254-a802-148047caeab5\") " pod="kube-system/kindnet-s4fsr"
	I0419 18:59:09.355993   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157248    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46c91d5e-edfa-4254-a802-148047caeab5-xtables-lock\") pod \"kindnet-s4fsr\" (UID: \"46c91d5e-edfa-4254-a802-148047caeab5\") " pod="kube-system/kindnet-s4fsr"
	I0419 18:59:09.356065   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.157178    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:09.356065   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.157539    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:01.657504317 +0000 UTC m=+6.817666984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:09.356134   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157392    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ffa0cfb9-91fb-4d5b-abe7-11992c731b74-tmp\") pod \"storage-provisioner\" (UID: \"ffa0cfb9-91fb-4d5b-abe7-11992c731b74\") " pod="kube-system/storage-provisioner"
	I0419 18:59:09.356134   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157844    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46c91d5e-edfa-4254-a802-148047caeab5-lib-modules\") pod \"kindnet-s4fsr\" (UID: \"46c91d5e-edfa-4254-a802-148047caeab5\") " pod="kube-system/kindnet-s4fsr"
	I0419 18:59:09.356243   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.176143    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89aa15d5f8e328791151d96100a36918" path="/var/lib/kubelet/pods/89aa15d5f8e328791151d96100a36918/volumes"
	I0419 18:59:09.356243   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.179130    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fef0b92f87f018a58c19217fdf5d6e1" path="/var/lib/kubelet/pods/8fef0b92f87f018a58c19217fdf5d6e1/volumes"
	I0419 18:59:09.356243   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.206903    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.356243   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.207139    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.356382   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.207264    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:01.707244177 +0000 UTC m=+6.867406744 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.356453   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.241569    1526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-348000" podStartSLOduration=0.241545984 podStartE2EDuration="241.545984ms" podCreationTimestamp="2024-04-20 01:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-20 01:58:01.218870918 +0000 UTC m=+6.379033485" watchObservedRunningTime="2024-04-20 01:58:01.241545984 +0000 UTC m=+6.401708551"
	I0419 18:59:09.356453   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.287607    1526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-348000" podStartSLOduration=0.287584435 podStartE2EDuration="287.584435ms" podCreationTimestamp="2024-04-20 01:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-20 01:58:01.265671392 +0000 UTC m=+6.425834059" watchObservedRunningTime="2024-04-20 01:58:01.287584435 +0000 UTC m=+6.447747102"
	I0419 18:59:09.356535   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.663973    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:09.356535   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.664077    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:02.664058382 +0000 UTC m=+7.824220949 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:09.356604   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.764474    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.356655   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.764518    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.356688   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.764584    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:02.764566131 +0000 UTC m=+7.924728698 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.356688   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: I0420 01:58:02.563904    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5a777eba295e3b640d8d8a60aedcc20243d0f4a6fc4d3f3391b06fc6de0247a"
	I0419 18:59:09.356798   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.564077    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.356798   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: I0420 01:58:02.565075    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-348000" podUID="af4afa87-c484-4b73-9a4d-e86ddcd90049"
	I0419 18:59:09.356798   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.679358    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:09.356876   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.679588    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:04.67956768 +0000 UTC m=+9.839730247 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:09.356970   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.789713    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.356970   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.791860    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357054   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.792206    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:04.792183185 +0000 UTC m=+9.952345752 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357054   14960 command_runner.go:130] > Apr 20 01:58:03 multinode-348000 kubelet[1526]: E0420 01:58:03.170851    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.357125   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.169519    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.357125   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.700421    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:09.357216   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.700676    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:08.700644486 +0000 UTC m=+13.860807053 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:09.357216   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.801637    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357287   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.801751    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357287   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.801874    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:08.801835856 +0000 UTC m=+13.961998423 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357356   14960 command_runner.go:130] > Apr 20 01:58:05 multinode-348000 kubelet[1526]: E0420 01:58:05.169947    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.357424   14960 command_runner.go:130] > Apr 20 01:58:06 multinode-348000 kubelet[1526]: E0420 01:58:06.169499    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.357424   14960 command_runner.go:130] > Apr 20 01:58:07 multinode-348000 kubelet[1526]: E0420 01:58:07.170147    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.357516   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.169208    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.357516   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.751778    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:09.357585   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.752347    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:16.752328447 +0000 UTC m=+21.912491114 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:09.357614   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.852291    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.852347    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.852455    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:16.852435774 +0000 UTC m=+22.012598341 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:09 multinode-348000 kubelet[1526]: E0420 01:58:09.169017    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:10 multinode-348000 kubelet[1526]: E0420 01:58:10.169399    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:11 multinode-348000 kubelet[1526]: E0420 01:58:11.169467    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:12 multinode-348000 kubelet[1526]: E0420 01:58:12.169441    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:13 multinode-348000 kubelet[1526]: E0420 01:58:13.169983    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:14 multinode-348000 kubelet[1526]: E0420 01:58:14.169635    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:15 multinode-348000 kubelet[1526]: E0420 01:58:15.169488    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.169756    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.835157    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.835299    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:32.835279204 +0000 UTC m=+37.995441771 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.936116    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.936169    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.936232    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:32.936212581 +0000 UTC m=+38.096375148 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.357639   14960 command_runner.go:130] > Apr 20 01:58:17 multinode-348000 kubelet[1526]: E0420 01:58:17.169160    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.358214   14960 command_runner.go:130] > Apr 20 01:58:18 multinode-348000 kubelet[1526]: E0420 01:58:18.171760    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.358214   14960 command_runner.go:130] > Apr 20 01:58:19 multinode-348000 kubelet[1526]: E0420 01:58:19.169723    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.358214   14960 command_runner.go:130] > Apr 20 01:58:20 multinode-348000 kubelet[1526]: E0420 01:58:20.169542    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.358214   14960 command_runner.go:130] > Apr 20 01:58:21 multinode-348000 kubelet[1526]: E0420 01:58:21.169675    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:22 multinode-348000 kubelet[1526]: E0420 01:58:22.169364    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: E0420 01:58:23.169569    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: I0420 01:58:23.960680    1526 scope.go:117] "RemoveContainer" containerID="8a37c65d06fabf8d836ffb9a511bb6df5b549fa37051ef79f1f839076af60512"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: I0420 01:58:23.961154    1526 scope.go:117] "RemoveContainer" containerID="f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: E0420 01:58:23.961603    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kindnet-cni pod=kindnet-s4fsr_kube-system(46c91d5e-edfa-4254-a802-148047caeab5)\"" pod="kube-system/kindnet-s4fsr" podUID="46c91d5e-edfa-4254-a802-148047caeab5"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:24 multinode-348000 kubelet[1526]: E0420 01:58:24.169608    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:25 multinode-348000 kubelet[1526]: E0420 01:58:25.169976    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:26 multinode-348000 kubelet[1526]: E0420 01:58:26.169734    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:27 multinode-348000 kubelet[1526]: E0420 01:58:27.170054    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:28 multinode-348000 kubelet[1526]: E0420 01:58:28.169260    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:29 multinode-348000 kubelet[1526]: E0420 01:58:29.169306    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:30 multinode-348000 kubelet[1526]: E0420 01:58:30.169857    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:31 multinode-348000 kubelet[1526]: E0420 01:58:31.169543    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.169556    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.891318    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.891496    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:59:04.891477649 +0000 UTC m=+70.051640216 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.992269    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.358384   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.992577    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.358958   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.992723    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:59:04.992688767 +0000 UTC m=+70.152851434 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:09.358958   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: I0420 01:58:33.115355    1526 scope.go:117] "RemoveContainer" containerID="e248c230a4aa379bf469f41a95d1ea2033316d322a10b6da0ae06f656334b936"
	I0419 18:59:09.358958   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: I0420 01:58:33.115897    1526 scope.go:117] "RemoveContainer" containerID="45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702"
	I0419 18:59:09.359099   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: E0420 01:58:33.116183    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ffa0cfb9-91fb-4d5b-abe7-11992c731b74)\"" pod="kube-system/storage-provisioner" podUID="ffa0cfb9-91fb-4d5b-abe7-11992c731b74"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: E0420 01:58:33.169303    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:34 multinode-348000 kubelet[1526]: E0420 01:58:34.169175    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:35 multinode-348000 kubelet[1526]: E0420 01:58:35.169508    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 kubelet[1526]: E0420 01:58:36.169960    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 kubelet[1526]: I0420 01:58:36.170769    1526 scope.go:117] "RemoveContainer" containerID="f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:37 multinode-348000 kubelet[1526]: E0420 01:58:37.171433    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:38 multinode-348000 kubelet[1526]: E0420 01:58:38.169747    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:39 multinode-348000 kubelet[1526]: E0420 01:58:39.169252    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:40 multinode-348000 kubelet[1526]: E0420 01:58:40.169368    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:40 multinode-348000 kubelet[1526]: I0420 01:58:40.269590    1526 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 kubelet[1526]: I0420 01:58:45.169759    1526 scope.go:117] "RemoveContainer" containerID="45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]: I0420 01:58:55.162183    1526 scope.go:117] "RemoveContainer" containerID="490377504e57c3189163833390967e79bb80d222691d4402677feb6f25ed22f4"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]: I0420 01:58:55.206283    1526 scope.go:117] "RemoveContainer" containerID="53f6a00490766be2eb687e6fff052ca7a46ae16a0baf4551e956c81550d673b2"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]: E0420 01:58:55.212558    1526 iptables.go:577] "Could not set up iptables canary" err=<
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 kubelet[1526]: I0420 01:59:05.918992    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75ff9f4e9dde29a997e4321dd3659a2ce7d479a75826a78c4d3525f1eb5f696f"
	I0419 18:59:09.359129   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 kubelet[1526]: I0420 01:59:05.948376    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f28a1e746a9b438367a8e05d2e1a085afb4abec4174f7a7eb80549e02b95047a"
	I0419 18:59:09.402632   14960 logs.go:123] Gathering logs for kube-apiserver [bd3aa93bac25] ...
	I0419 18:59:09.402632   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd3aa93bac25"
	I0419 18:59:09.437496   14960 command_runner.go:130] ! I0420 01:57:57.501840       1 options.go:221] external host was not specified, using 172.19.42.24
	I0419 18:59:09.437570   14960 command_runner.go:130] ! I0420 01:57:57.505380       1 server.go:148] Version: v1.30.0
	I0419 18:59:09.438968   14960 command_runner.go:130] ! I0420 01:57:57.505690       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.439029   14960 command_runner.go:130] ! I0420 01:57:58.138487       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0419 18:59:09.439077   14960 command_runner.go:130] ! I0420 01:57:58.138530       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0419 18:59:09.439112   14960 command_runner.go:130] ! I0420 01:57:58.138987       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0419 18:59:09.439148   14960 command_runner.go:130] ! I0420 01:57:58.139098       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 18:59:09.439201   14960 command_runner.go:130] ! I0420 01:57:58.139890       1 instance.go:299] Using reconciler: lease
	I0419 18:59:09.439236   14960 command_runner.go:130] ! I0420 01:57:59.078678       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0419 18:59:09.439236   14960 command_runner.go:130] ! W0420 01:57:59.078889       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439266   14960 command_runner.go:130] ! I0420 01:57:59.354874       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.355339       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.630985       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.818361       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.834974       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.835019       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.835028       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.835870       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.835981       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.837241       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.838781       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.838919       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.838930       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.841133       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.841240       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.842492       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.842627       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.842640       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.843439       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.843519       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.843649       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.844516       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.847031       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.847132       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.847143       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.847848       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.847881       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.847889       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.849069       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.849173       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.851437       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.851563       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.851574       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:09.439293   14960 command_runner.go:130] ! I0420 01:57:59.852258       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0419 18:59:09.439293   14960 command_runner.go:130] ! W0420 01:57:59.852357       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439879   14960 command_runner.go:130] ! W0420 01:57:59.852367       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:09.439879   14960 command_runner.go:130] ! I0420 01:57:59.855318       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0419 18:59:09.439879   14960 command_runner.go:130] ! W0420 01:57:59.855413       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439879   14960 command_runner.go:130] ! W0420 01:57:59.855499       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:09.439879   14960 command_runner.go:130] ! I0420 01:57:59.857232       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0419 18:59:09.439879   14960 command_runner.go:130] ! I0420 01:57:59.859073       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0419 18:59:09.439879   14960 command_runner.go:130] ! W0420 01:57:59.859177       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0419 18:59:09.439879   14960 command_runner.go:130] ! W0420 01:57:59.859187       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.439879   14960 command_runner.go:130] ! I0420 01:57:59.866540       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0419 18:59:09.440024   14960 command_runner.go:130] ! W0420 01:57:59.866633       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0419 18:59:09.440024   14960 command_runner.go:130] ! W0420 01:57:59.866643       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:57:59.873672       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0419 18:59:09.440091   14960 command_runner.go:130] ! W0420 01:57:59.873814       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.440091   14960 command_runner.go:130] ! W0420 01:57:59.873827       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:57:59.875959       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0419 18:59:09.440091   14960 command_runner.go:130] ! W0420 01:57:59.875999       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:57:59.909243       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0419 18:59:09.440091   14960 command_runner.go:130] ! W0420 01:57:59.909284       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.597195       1 secure_serving.go:213] Serving securely on [::]:8443
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.597666       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.598134       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.597703       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.597737       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.600064       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.600948       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.601165       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.601445       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.602539       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.602852       1 aggregator.go:163] waiting for initial CRD sync...
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.603187       1 controller.go:78] Starting OpenAPI AggregationController
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.604023       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.604384       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.606631       1 available_controller.go:423] Starting AvailableConditionController
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.606857       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607138       1 controller.go:116] Starting legacy_token_tracking_controller
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607178       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607325       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607349       1 controller.go:139] Starting OpenAPI controller
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607381       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607407       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607409       1 naming_controller.go:291] Starting NamingConditionController
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607487       1 establishing_controller.go:76] Starting EstablishingController
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607512       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0419 18:59:09.440091   14960 command_runner.go:130] ! I0420 01:58:00.607530       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0419 18:59:09.440629   14960 command_runner.go:130] ! I0420 01:58:00.607546       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0419 18:59:09.440629   14960 command_runner.go:130] ! I0420 01:58:00.608170       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0419 18:59:09.440629   14960 command_runner.go:130] ! I0420 01:58:00.608198       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0419 18:59:09.440629   14960 command_runner.go:130] ! I0420 01:58:00.608328       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:09.440629   14960 command_runner.go:130] ! I0420 01:58:00.608421       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:09.440629   14960 command_runner.go:130] ! I0420 01:58:00.607383       1 controller.go:87] Starting OpenAPI V3 controller
	I0419 18:59:09.440719   14960 command_runner.go:130] ! I0420 01:58:00.709605       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0419 18:59:09.440719   14960 command_runner.go:130] ! I0420 01:58:00.736531       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0419 18:59:09.440762   14960 command_runner.go:130] ! I0420 01:58:00.737086       1 shared_informer.go:320] Caches are synced for configmaps
	I0419 18:59:09.440798   14960 command_runner.go:130] ! I0420 01:58:00.737192       1 aggregator.go:165] initial CRD sync complete...
	I0419 18:59:09.440798   14960 command_runner.go:130] ! I0420 01:58:00.737219       1 autoregister_controller.go:141] Starting autoregister controller
	I0419 18:59:09.440798   14960 command_runner.go:130] ! I0420 01:58:00.737225       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0419 18:59:09.440798   14960 command_runner.go:130] ! I0420 01:58:00.737230       1 cache.go:39] Caches are synced for autoregister controller
	I0419 18:59:09.440798   14960 command_runner.go:130] ! I0420 01:58:00.740699       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 18:59:09.440877   14960 command_runner.go:130] ! I0420 01:58:00.741004       1 policy_source.go:224] refreshing policies
	I0419 18:59:09.440877   14960 command_runner.go:130] ! I0420 01:58:00.742672       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0419 18:59:09.440877   14960 command_runner.go:130] ! I0420 01:58:00.747054       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0419 18:59:09.440959   14960 command_runner.go:130] ! I0420 01:58:00.805770       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0419 18:59:09.440959   14960 command_runner.go:130] ! I0420 01:58:00.807460       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0419 18:59:09.440959   14960 command_runner.go:130] ! I0420 01:58:00.814456       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0419 18:59:09.440959   14960 command_runner.go:130] ! I0420 01:58:00.814490       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0419 18:59:09.441036   14960 command_runner.go:130] ! I0420 01:58:00.815844       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0419 18:59:09.441036   14960 command_runner.go:130] ! I0420 01:58:01.612010       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0419 18:59:09.441036   14960 command_runner.go:130] ! W0420 01:58:02.160618       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.42.231 172.19.42.24]
	I0419 18:59:09.441036   14960 command_runner.go:130] ! I0420 01:58:02.163332       1 controller.go:615] quota admission added evaluator for: endpoints
	I0419 18:59:09.441036   14960 command_runner.go:130] ! I0420 01:58:02.176968       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0419 18:59:09.441113   14960 command_runner.go:130] ! I0420 01:58:03.430204       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0419 18:59:09.441113   14960 command_runner.go:130] ! I0420 01:58:03.761410       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0419 18:59:09.441113   14960 command_runner.go:130] ! I0420 01:58:03.780335       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0419 18:59:09.441191   14960 command_runner.go:130] ! I0420 01:58:03.907022       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0419 18:59:09.441191   14960 command_runner.go:130] ! I0420 01:58:03.924019       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0419 18:59:09.441191   14960 command_runner.go:130] ! W0420 01:58:22.143512       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.42.24]
	I0419 18:59:09.449271   14960 logs.go:123] Gathering logs for kube-scheduler [e476774b8f77] ...
	I0419 18:59:09.449271   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e476774b8f77"
	I0419 18:59:09.482628   14960 command_runner.go:130] ! I0420 01:35:03.474569       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:09.483133   14960 command_runner.go:130] ! W0420 01:35:04.965330       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0419 18:59:09.483133   14960 command_runner.go:130] ! W0420 01:35:04.965379       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:09.483357   14960 command_runner.go:130] ! W0420 01:35:04.965392       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0419 18:59:09.483391   14960 command_runner.go:130] ! W0420 01:35:04.965399       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0419 18:59:09.483476   14960 command_runner.go:130] ! I0420 01:35:05.040739       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0419 18:59:09.483476   14960 command_runner.go:130] ! I0420 01:35:05.040800       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.483476   14960 command_runner.go:130] ! I0420 01:35:05.044777       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0419 18:59:09.483476   14960 command_runner.go:130] ! I0420 01:35:05.045192       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 18:59:09.483556   14960 command_runner.go:130] ! I0420 01:35:05.045423       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:09.483556   14960 command_runner.go:130] ! I0420 01:35:05.046180       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:09.483609   14960 command_runner.go:130] ! W0420 01:35:05.063208       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:09.483609   14960 command_runner.go:130] ! E0420 01:35:05.064240       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:09.483676   14960 command_runner.go:130] ! W0420 01:35:05.063609       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.483676   14960 command_runner.go:130] ! E0420 01:35:05.065130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.483731   14960 command_runner.go:130] ! W0420 01:35:05.063676       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! E0420 01:35:05.065433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! W0420 01:35:05.063732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! E0420 01:35:05.065801       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! W0420 01:35:05.063780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! E0420 01:35:05.066820       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! W0420 01:35:05.063927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! E0420 01:35:05.067122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! W0420 01:35:05.063973       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! E0420 01:35:05.069517       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! W0420 01:35:05.064025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! E0420 01:35:05.069884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! W0420 01:35:05.064095       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! E0420 01:35:05.070309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! W0420 01:35:05.064163       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! E0420 01:35:05.070884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! W0420 01:35:05.070236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:09.483762   14960 command_runner.go:130] ! E0420 01:35:05.071293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! W0420 01:35:05.070677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! E0420 01:35:05.072125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! W0420 01:35:05.070741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! E0420 01:35:05.073528       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! W0420 01:35:05.072410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! E0420 01:35:05.073910       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! W0420 01:35:05.072540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! E0420 01:35:05.074332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! W0420 01:35:05.987809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! E0420 01:35:05.988072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! W0420 01:35:06.078924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! E0420 01:35:06.079045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! W0420 01:35:06.146102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! E0420 01:35:06.146225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! W0420 01:35:06.213142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! E0420 01:35:06.213279       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.484381   14960 command_runner.go:130] ! W0420 01:35:06.278808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.484979   14960 command_runner.go:130] ! E0420 01:35:06.279232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.484979   14960 command_runner.go:130] ! W0420 01:35:06.310265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:09.484979   14960 command_runner.go:130] ! E0420 01:35:06.311126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:09.484979   14960 command_runner.go:130] ! W0420 01:35:06.333128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:09.485195   14960 command_runner.go:130] ! E0420 01:35:06.333531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:09.485195   14960 command_runner.go:130] ! W0420 01:35:06.355993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:09.485195   14960 command_runner.go:130] ! E0420 01:35:06.356053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:09.485195   14960 command_runner.go:130] ! W0420 01:35:06.356154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:09.485195   14960 command_runner.go:130] ! E0420 01:35:06.356365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:09.485195   14960 command_runner.go:130] ! W0420 01:35:06.490128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:09.485361   14960 command_runner.go:130] ! E0420 01:35:06.490240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:09.485361   14960 command_runner.go:130] ! W0420 01:35:06.496247       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:09.485458   14960 command_runner.go:130] ! E0420 01:35:06.496709       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:09.485458   14960 command_runner.go:130] ! W0420 01:35:06.552817       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.485538   14960 command_runner.go:130] ! E0420 01:35:06.552917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.485538   14960 command_runner.go:130] ! W0420 01:35:06.607496       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.485593   14960 command_runner.go:130] ! E0420 01:35:06.607914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:09.485666   14960 command_runner.go:130] ! W0420 01:35:06.608255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:09.485713   14960 command_runner.go:130] ! E0420 01:35:06.608488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:09.485713   14960 command_runner.go:130] ! W0420 01:35:06.623642       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:09.485713   14960 command_runner.go:130] ! E0420 01:35:06.624029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:09.485713   14960 command_runner.go:130] ! I0420 01:35:09.746203       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:09.485827   14960 command_runner.go:130] ! I0420 01:55:30.893306       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0419 18:59:09.485827   14960 command_runner.go:130] ! I0420 01:55:30.893359       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0419 18:59:09.485827   14960 command_runner.go:130] ! I0420 01:55:30.893732       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 18:59:09.485827   14960 command_runner.go:130] ! E0420 01:55:30.894682       1 run.go:74] "command failed" err="finished without leader elect"
	I0419 18:59:09.497120   14960 logs.go:123] Gathering logs for kindnet [ae0b21715f86] ...
	I0419 18:59:09.497120   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0b21715f86"
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:36.715209       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:36.715359       1 main.go:107] hostIP = 172.19.42.24
	I0419 18:59:09.524085   14960 command_runner.go:130] ! podIP = 172.19.42.24
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:36.715480       1 main.go:116] setting mtu 1500 for CNI 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:36.715877       1 main.go:146] kindnetd IP family: "ipv4"
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:36.806023       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:37.413197       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:37.413291       1 main.go:227] handling current node
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:37.413685       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:37.413745       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:37.414005       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.19.32.249 Flags: [] Table: 0} 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:37.506308       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:37.506405       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:37.506676       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.19.37.59 Flags: [] Table: 0} 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:47.525508       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:47.525608       1 main.go:227] handling current node
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:47.525629       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:47.525638       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:47.526101       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:47.526135       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:57.538448       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:57.538834       1 main.go:227] handling current node
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:57.538899       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:57.538926       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:57.539176       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:58:57.539274       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:59:07.555783       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:59:07.555932       1 main.go:227] handling current node
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:59:07.556426       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:59:07.556438       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:59:07.556563       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:09.524085   14960 command_runner.go:130] ! I0420 01:59:07.556590       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:09.529572   14960 logs.go:123] Gathering logs for kindnet [f8c798c99407] ...
	I0419 18:59:09.529700   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c798c99407"
	I0419 18:59:09.557319   14960 command_runner.go:130] ! I0420 01:58:03.441751       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0419 18:59:09.557417   14960 command_runner.go:130] ! I0420 01:58:03.511070       1 main.go:107] hostIP = 172.19.42.24
	I0419 18:59:09.557417   14960 command_runner.go:130] ! podIP = 172.19.42.24
	I0419 18:59:09.557417   14960 command_runner.go:130] ! I0420 01:58:03.513110       1 main.go:116] setting mtu 1500 for CNI 
	I0419 18:59:09.557417   14960 command_runner.go:130] ! I0420 01:58:03.513147       1 main.go:146] kindnetd IP family: "ipv4"
	I0419 18:59:09.557417   14960 command_runner.go:130] ! I0420 01:58:03.513182       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0419 18:59:09.557417   14960 command_runner.go:130] ! I0420 01:58:07.011650       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:09.557573   14960 command_runner.go:130] ! I0420 01:58:10.084231       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:09.557573   14960 command_runner.go:130] ! I0420 01:58:13.156371       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:09.557573   14960 command_runner.go:130] ! I0420 01:58:16.227521       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:09.557573   14960 command_runner.go:130] ! I0420 01:58:19.299385       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:09.557573   14960 command_runner.go:130] ! panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:09.557573   14960 command_runner.go:130] ! goroutine 1 [running]:
	I0419 18:59:09.557573   14960 command_runner.go:130] ! main.main()
	I0419 18:59:09.557745   14960 command_runner.go:130] ! 	/go/src/cmd/kindnetd/main.go:195 +0xd3d
	I0419 18:59:09.560359   14960 logs.go:123] Gathering logs for describe nodes ...
	I0419 18:59:09.560429   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 18:59:09.801734   14960 command_runner.go:130] > Name:               multinode-348000
	I0419 18:59:09.802729   14960 command_runner.go:130] > Roles:              control-plane
	I0419 18:59:09.802729   14960 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0419 18:59:09.802781   14960 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0419 18:59:09.802781   14960 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0419 18:59:09.802781   14960 command_runner.go:130] >                     kubernetes.io/hostname=multinode-348000
	I0419 18:59:09.802781   14960 command_runner.go:130] >                     kubernetes.io/os=linux
	I0419 18:59:09.802781   14960 command_runner.go:130] >                     minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	I0419 18:59:09.802781   14960 command_runner.go:130] >                     minikube.k8s.io/name=multinode-348000
	I0419 18:59:09.802781   14960 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0419 18:59:09.802781   14960 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_04_19T18_35_09_0700
	I0419 18:59:09.802868   14960 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0419 18:59:09.802868   14960 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0419 18:59:09.802868   14960 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0419 18:59:09.802914   14960 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0419 18:59:09.802914   14960 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0419 18:59:09.802914   14960 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0419 18:59:09.802914   14960 command_runner.go:130] > CreationTimestamp:  Sat, 20 Apr 2024 01:35:05 +0000
	I0419 18:59:09.802914   14960 command_runner.go:130] > Taints:             <none>
	I0419 18:59:09.802914   14960 command_runner.go:130] > Unschedulable:      false
	I0419 18:59:09.802914   14960 command_runner.go:130] > Lease:
	I0419 18:59:09.802914   14960 command_runner.go:130] >   HolderIdentity:  multinode-348000
	I0419 18:59:09.802914   14960 command_runner.go:130] >   AcquireTime:     <unset>
	I0419 18:59:09.802914   14960 command_runner.go:130] >   RenewTime:       Sat, 20 Apr 2024 01:59:01 +0000
	I0419 18:59:09.802914   14960 command_runner.go:130] > Conditions:
	I0419 18:59:09.802914   14960 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0419 18:59:09.802914   14960 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0419 18:59:09.802914   14960 command_runner.go:130] >   MemoryPressure   False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0419 18:59:09.802914   14960 command_runner.go:130] >   DiskPressure     False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0419 18:59:09.802914   14960 command_runner.go:130] >   PIDPressure      False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0419 18:59:09.802914   14960 command_runner.go:130] >   Ready            True    Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:58:40 +0000   KubeletReady                 kubelet is posting ready status
	I0419 18:59:09.802914   14960 command_runner.go:130] > Addresses:
	I0419 18:59:09.802914   14960 command_runner.go:130] >   InternalIP:  172.19.42.24
	I0419 18:59:09.802914   14960 command_runner.go:130] >   Hostname:    multinode-348000
	I0419 18:59:09.802914   14960 command_runner.go:130] > Capacity:
	I0419 18:59:09.803499   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:09.803499   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:09.803499   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:09.803499   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:09.803499   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:09.803499   14960 command_runner.go:130] > Allocatable:
	I0419 18:59:09.803499   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:09.803499   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:09.803499   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:09.803499   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:09.803499   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:09.803499   14960 command_runner.go:130] > System Info:
	I0419 18:59:09.803499   14960 command_runner.go:130] >   Machine ID:                 bd21fc8af31a4161a4396c16b70a2fc3
	I0419 18:59:09.803637   14960 command_runner.go:130] >   System UUID:                fdc3fb6e-1818-9a4e-b496-b7ed0124a8e6
	I0419 18:59:09.803637   14960 command_runner.go:130] >   Boot ID:                    047b982b-9f97-4a1a-8f8a-a308f369753b
	I0419 18:59:09.803637   14960 command_runner.go:130] >   Kernel Version:             5.10.207
	I0419 18:59:09.803637   14960 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0419 18:59:09.803637   14960 command_runner.go:130] >   Operating System:           linux
	I0419 18:59:09.803637   14960 command_runner.go:130] >   Architecture:               amd64
	I0419 18:59:09.803637   14960 command_runner.go:130] >   Container Runtime Version:  docker://26.0.1
	I0419 18:59:09.803637   14960 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0419 18:59:09.803734   14960 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0419 18:59:09.803734   14960 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0419 18:59:09.803734   14960 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0419 18:59:09.803734   14960 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0419 18:59:09.803734   14960 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0419 18:59:09.803734   14960 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0419 18:59:09.803734   14960 command_runner.go:130] >   default                     busybox-fc5497c4f-xnz2k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0419 18:59:09.803854   14960 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-7w477                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0419 18:59:09.803854   14960 command_runner.go:130] >   kube-system                 etcd-multinode-348000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0419 18:59:09.803854   14960 command_runner.go:130] >   kube-system                 kindnet-s4fsr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0419 18:59:09.803854   14960 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-348000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0419 18:59:09.803854   14960 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-348000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0419 18:59:09.803854   14960 command_runner.go:130] >   kube-system                 kube-proxy-kj76x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0419 18:59:09.803974   14960 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-348000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0419 18:59:09.803974   14960 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0419 18:59:09.803974   14960 command_runner.go:130] > Allocated resources:
	I0419 18:59:09.803974   14960 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0419 18:59:09.803974   14960 command_runner.go:130] >   Resource           Requests     Limits
	I0419 18:59:09.803974   14960 command_runner.go:130] >   --------           --------     ------
	I0419 18:59:09.804062   14960 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0419 18:59:09.804062   14960 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0419 18:59:09.804062   14960 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0419 18:59:09.804062   14960 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0419 18:59:09.804062   14960 command_runner.go:130] > Events:
	I0419 18:59:09.804062   14960 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0419 18:59:09.804062   14960 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0419 18:59:09.804144   14960 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0419 18:59:09.804144   14960 command_runner.go:130] >   Normal  Starting                 66s                kube-proxy       
	I0419 18:59:09.804144   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-348000 status is now: NodeHasSufficientPID
	I0419 18:59:09.804144   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:09.804223   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-348000 status is now: NodeHasSufficientMemory
	I0419 18:59:09.804223   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-348000 status is now: NodeHasNoDiskPressure
	I0419 18:59:09.804223   14960 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0419 18:59:09.804223   14960 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-348000 event: Registered Node multinode-348000 in Controller
	I0419 18:59:09.804302   14960 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-348000 status is now: NodeReady
	I0419 18:59:09.804302   14960 command_runner.go:130] >   Normal  Starting                 74s                kubelet          Starting kubelet.
	I0419 18:59:09.804302   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node multinode-348000 status is now: NodeHasSufficientMemory
	I0419 18:59:09.804302   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node multinode-348000 status is now: NodeHasNoDiskPressure
	I0419 18:59:09.804302   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node multinode-348000 status is now: NodeHasSufficientPID
	I0419 18:59:09.804383   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:09.804383   14960 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-348000 event: Registered Node multinode-348000 in Controller
	I0419 18:59:09.804383   14960 command_runner.go:130] > Name:               multinode-348000-m02
	I0419 18:59:09.804383   14960 command_runner.go:130] > Roles:              <none>
	I0419 18:59:09.804383   14960 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0419 18:59:09.804383   14960 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0419 18:59:09.804383   14960 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0419 18:59:09.804383   14960 command_runner.go:130] >                     kubernetes.io/hostname=multinode-348000-m02
	I0419 18:59:09.804383   14960 command_runner.go:130] >                     kubernetes.io/os=linux
	I0419 18:59:09.804465   14960 command_runner.go:130] >                     minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	I0419 18:59:09.804465   14960 command_runner.go:130] >                     minikube.k8s.io/name=multinode-348000
	I0419 18:59:09.804499   14960 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0419 18:59:09.804499   14960 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_04_19T18_38_19_0700
	I0419 18:59:09.804528   14960 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0419 18:59:09.804528   14960 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0419 18:59:09.804528   14960 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0419 18:59:09.804528   14960 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0419 18:59:09.804528   14960 command_runner.go:130] > CreationTimestamp:  Sat, 20 Apr 2024 01:38:18 +0000
	I0419 18:59:09.804528   14960 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0419 18:59:09.804528   14960 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0419 18:59:09.804528   14960 command_runner.go:130] > Unschedulable:      false
	I0419 18:59:09.804528   14960 command_runner.go:130] > Lease:
	I0419 18:59:09.804528   14960 command_runner.go:130] >   HolderIdentity:  multinode-348000-m02
	I0419 18:59:09.804528   14960 command_runner.go:130] >   AcquireTime:     <unset>
	I0419 18:59:09.804528   14960 command_runner.go:130] >   RenewTime:       Sat, 20 Apr 2024 01:54:49 +0000
	I0419 18:59:09.804528   14960 command_runner.go:130] > Conditions:
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0419 18:59:09.804528   14960 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0419 18:59:09.804528   14960 command_runner.go:130] >   MemoryPressure   Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:09.804528   14960 command_runner.go:130] >   DiskPressure     Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:09.804528   14960 command_runner.go:130] >   PIDPressure      Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Ready            Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:09.804528   14960 command_runner.go:130] > Addresses:
	I0419 18:59:09.804528   14960 command_runner.go:130] >   InternalIP:  172.19.32.249
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Hostname:    multinode-348000-m02
	I0419 18:59:09.804528   14960 command_runner.go:130] > Capacity:
	I0419 18:59:09.804528   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:09.804528   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:09.804528   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:09.804528   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:09.804528   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:09.804528   14960 command_runner.go:130] > Allocatable:
	I0419 18:59:09.804528   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:09.804528   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:09.804528   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:09.804528   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:09.804528   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:09.804528   14960 command_runner.go:130] > System Info:
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Machine ID:                 ea453a3100b34d789441206109708446
	I0419 18:59:09.804528   14960 command_runner.go:130] >   System UUID:                9f7972f9-8942-ef4f-b0cf-029b405f5832
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Boot ID:                    d8ef37df-1396-47c1-8bea-04667e5bc60b
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Kernel Version:             5.10.207
	I0419 18:59:09.804528   14960 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Operating System:           linux
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Architecture:               amd64
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Container Runtime Version:  docker://26.0.1
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0419 18:59:09.804528   14960 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0419 18:59:09.804528   14960 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0419 18:59:09.804528   14960 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0419 18:59:09.805135   14960 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0419 18:59:09.805135   14960 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0419 18:59:09.805135   14960 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0419 18:59:09.805135   14960 command_runner.go:130] >   default                     busybox-fc5497c4f-2d5hs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0419 18:59:09.805135   14960 command_runner.go:130] >   kube-system                 kindnet-s98rh              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0419 18:59:09.805135   14960 command_runner.go:130] >   kube-system                 kube-proxy-bjv9b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0419 18:59:09.805135   14960 command_runner.go:130] > Allocated resources:
	I0419 18:59:09.805135   14960 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0419 18:59:09.805135   14960 command_runner.go:130] >   Resource           Requests   Limits
	I0419 18:59:09.805288   14960 command_runner.go:130] >   --------           --------   ------
	I0419 18:59:09.805288   14960 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0419 18:59:09.805288   14960 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0419 18:59:09.805322   14960 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0419 18:59:09.805322   14960 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0419 18:59:09.805322   14960 command_runner.go:130] > Events:
	I0419 18:59:09.805374   14960 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0419 18:59:09.805374   14960 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0419 18:59:09.805409   14960 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0419 18:59:09.805439   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-348000-m02 status is now: NodeHasSufficientMemory
	I0419 18:59:09.805439   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-348000-m02 status is now: NodeHasNoDiskPressure
	I0419 18:59:09.805477   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-348000-m02 status is now: NodeHasSufficientPID
	I0419 18:59:09.805477   14960 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-348000-m02 event: Registered Node multinode-348000-m02 in Controller
	I0419 18:59:09.805477   14960 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-348000-m02 status is now: NodeReady
	I0419 18:59:09.805477   14960 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-348000-m02 event: Registered Node multinode-348000-m02 in Controller
	I0419 18:59:09.805558   14960 command_runner.go:130] >   Normal  NodeNotReady             16s                node-controller  Node multinode-348000-m02 status is now: NodeNotReady
	I0419 18:59:09.805558   14960 command_runner.go:130] > Name:               multinode-348000-m03
	I0419 18:59:09.805558   14960 command_runner.go:130] > Roles:              <none>
	I0419 18:59:09.805558   14960 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0419 18:59:09.805558   14960 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0419 18:59:09.805558   14960 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0419 18:59:09.805639   14960 command_runner.go:130] >                     kubernetes.io/hostname=multinode-348000-m03
	I0419 18:59:09.805639   14960 command_runner.go:130] >                     kubernetes.io/os=linux
	I0419 18:59:09.805639   14960 command_runner.go:130] >                     minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	I0419 18:59:09.805639   14960 command_runner.go:130] >                     minikube.k8s.io/name=multinode-348000
	I0419 18:59:09.805639   14960 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0419 18:59:09.805639   14960 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_04_19T18_53_29_0700
	I0419 18:59:09.805639   14960 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0419 18:59:09.805736   14960 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0419 18:59:09.805736   14960 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0419 18:59:09.805736   14960 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0419 18:59:09.805736   14960 command_runner.go:130] > CreationTimestamp:  Sat, 20 Apr 2024 01:53:28 +0000
	I0419 18:59:09.805736   14960 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0419 18:59:09.805736   14960 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0419 18:59:09.805736   14960 command_runner.go:130] > Unschedulable:      false
	I0419 18:59:09.805736   14960 command_runner.go:130] > Lease:
	I0419 18:59:09.805853   14960 command_runner.go:130] >   HolderIdentity:  multinode-348000-m03
	I0419 18:59:09.805853   14960 command_runner.go:130] >   AcquireTime:     <unset>
	I0419 18:59:09.805853   14960 command_runner.go:130] >   RenewTime:       Sat, 20 Apr 2024 01:54:29 +0000
	I0419 18:59:09.805853   14960 command_runner.go:130] > Conditions:
	I0419 18:59:09.805853   14960 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0419 18:59:09.805853   14960 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0419 18:59:09.805853   14960 command_runner.go:130] >   MemoryPressure   Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:09.805853   14960 command_runner.go:130] >   DiskPressure     Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:09.805971   14960 command_runner.go:130] >   PIDPressure      Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:09.805971   14960 command_runner.go:130] >   Ready            Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:09.805971   14960 command_runner.go:130] > Addresses:
	I0419 18:59:09.805971   14960 command_runner.go:130] >   InternalIP:  172.19.37.59
	I0419 18:59:09.805971   14960 command_runner.go:130] >   Hostname:    multinode-348000-m03
	I0419 18:59:09.805971   14960 command_runner.go:130] > Capacity:
	I0419 18:59:09.805971   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:09.805971   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:09.806065   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:09.806065   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:09.806090   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:09.806090   14960 command_runner.go:130] > Allocatable:
	I0419 18:59:09.806090   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:09.806090   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:09.806090   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:09.806137   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:09.806137   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:09.806137   14960 command_runner.go:130] > System Info:
	I0419 18:59:09.806137   14960 command_runner.go:130] >   Machine ID:                 02e45e9bf03f4852a443a43ac6a8538b
	I0419 18:59:09.806137   14960 command_runner.go:130] >   System UUID:                37a43d59-2157-6e44-8d13-6c975ea12fea
	I0419 18:59:09.806137   14960 command_runner.go:130] >   Boot ID:                    404bc64b-d4fc-4c63-a589-8191649bdfaa
	I0419 18:59:09.806201   14960 command_runner.go:130] >   Kernel Version:             5.10.207
	I0419 18:59:09.806201   14960 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0419 18:59:09.806201   14960 command_runner.go:130] >   Operating System:           linux
	I0419 18:59:09.806201   14960 command_runner.go:130] >   Architecture:               amd64
	I0419 18:59:09.806271   14960 command_runner.go:130] >   Container Runtime Version:  docker://26.0.1
	I0419 18:59:09.806271   14960 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0419 18:59:09.806271   14960 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0419 18:59:09.806271   14960 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0419 18:59:09.806271   14960 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0419 18:59:09.806333   14960 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0419 18:59:09.806333   14960 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0419 18:59:09.806333   14960 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0419 18:59:09.806333   14960 command_runner.go:130] >   kube-system                 kindnet-mg8qs       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0419 18:59:09.806410   14960 command_runner.go:130] >   kube-system                 kube-proxy-2jjsq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0419 18:59:09.806410   14960 command_runner.go:130] > Allocated resources:
	I0419 18:59:09.806410   14960 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0419 18:59:09.806410   14960 command_runner.go:130] >   Resource           Requests   Limits
	I0419 18:59:09.806410   14960 command_runner.go:130] >   --------           --------   ------
	I0419 18:59:09.806410   14960 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0419 18:59:09.806469   14960 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0419 18:59:09.806469   14960 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0419 18:59:09.806469   14960 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0419 18:59:09.806469   14960 command_runner.go:130] > Events:
	I0419 18:59:09.806469   14960 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0419 18:59:09.806536   14960 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0419 18:59:09.806536   14960 command_runner.go:130] >   Normal  Starting                 5m37s                  kube-proxy       
	I0419 18:59:09.806536   14960 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0419 18:59:09.806595   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:09.806595   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientMemory
	I0419 18:59:09.806674   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-348000-m03 status is now: NodeHasNoDiskPressure
	I0419 18:59:09.806674   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientPID
	I0419 18:59:09.806674   14960 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-348000-m03 status is now: NodeReady
	I0419 18:59:09.806674   14960 command_runner.go:130] >   Normal  Starting                 5m41s                  kubelet          Starting kubelet.
	I0419 18:59:09.806737   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m41s (x2 over 5m41s)  kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientMemory
	I0419 18:59:09.806737   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m41s (x2 over 5m41s)  kubelet          Node multinode-348000-m03 status is now: NodeHasNoDiskPressure
	I0419 18:59:09.806737   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m41s (x2 over 5m41s)  kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientPID
	I0419 18:59:09.806805   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m41s                  kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:09.806805   14960 command_runner.go:130] >   Normal  RegisteredNode           5m37s                  node-controller  Node multinode-348000-m03 event: Registered Node multinode-348000-m03 in Controller
	I0419 18:59:09.806805   14960 command_runner.go:130] >   Normal  NodeReady                5m33s                  kubelet          Node multinode-348000-m03 status is now: NodeReady
	I0419 18:59:09.806874   14960 command_runner.go:130] >   Normal  NodeNotReady             3m56s                  node-controller  Node multinode-348000-m03 status is now: NodeNotReady
	I0419 18:59:09.806874   14960 command_runner.go:130] >   Normal  RegisteredNode           56s                    node-controller  Node multinode-348000-m03 event: Registered Node multinode-348000-m03 in Controller
	I0419 18:59:09.817673   14960 logs.go:123] Gathering logs for etcd [2deabe4dbdf4] ...
	I0419 18:59:09.817673   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2deabe4dbdf4"
	I0419 18:59:09.859066   14960 command_runner.go:130] ! {"level":"warn","ts":"2024-04-20T01:57:57.046906Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0419 18:59:09.860059   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.051203Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.19.42.24:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.19.42.24:2380","--initial-cluster=multinode-348000=https://172.19.42.24:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.19.42.24:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.19.42.24:2380","--name=multinode-348000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0419 18:59:09.860119   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.05132Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0419 18:59:09.860235   14960 command_runner.go:130] ! {"level":"warn","ts":"2024-04-20T01:57:57.053068Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0419 18:59:09.860235   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.053085Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.19.42.24:2380"]}
	I0419 18:59:09.860295   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.053402Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0419 18:59:09.860347   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.06821Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"]}
	I0419 18:59:09.860481   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.071769Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-348000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.19.42.24:2380"],"listen-peer-urls":["https://172.19.42.24:2380"],"advertise-client-urls":["https://172.19.42.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0419 18:59:09.860549   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.117145Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"37.959314ms"}
	I0419 18:59:09.860549   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.163657Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0419 18:59:09.860549   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186114Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","commit-index":1996}
	I0419 18:59:09.860615   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c switched to configuration voters=()"}
	I0419 18:59:09.860673   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became follower at term 2"}
	I0419 18:59:09.860673   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 4fba18389b33806c [peers: [], term: 2, commit: 1996, applied: 0, lastindex: 1996, lastterm: 2]"}
	I0419 18:59:09.860673   14960 command_runner.go:130] ! {"level":"warn","ts":"2024-04-20T01:57:57.204366Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0419 18:59:09.860741   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.210889Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1364}
	I0419 18:59:09.860741   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.22333Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1726}
	I0419 18:59:09.860741   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.233905Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0419 18:59:09.860811   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.247902Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"4fba18389b33806c","timeout":"7s"}
	I0419 18:59:09.860811   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.252957Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"4fba18389b33806c"}
	I0419 18:59:09.860879   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.253239Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"4fba18389b33806c","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0419 18:59:09.860879   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.257675Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0419 18:59:09.860879   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.259962Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0419 18:59:09.860963   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.260237Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0419 18:59:09.860963   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.26046Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0419 18:59:09.860963   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c switched to configuration voters=(5744930906065567852)"}
	I0419 18:59:09.861029   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264281Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","added-peer-id":"4fba18389b33806c","added-peer-peer-urls":["https://172.19.42.231:2380"]}
	I0419 18:59:09.861098   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264439Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","cluster-version":"3.5"}
	I0419 18:59:09.861098   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264612Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.271976Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.273753Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4fba18389b33806c","initial-advertise-peer-urls":["https://172.19.42.24:2380"],"listen-peer-urls":["https://172.19.42.24:2380"],"advertise-client-urls":["https://172.19.42.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.27526Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.27622Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.42.24:2380"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.277207Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.42.24:2380"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c is starting a new election at term 2"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became pre-candidate at term 2"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c received MsgPreVoteResp from 4fba18389b33806c at term 2"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became candidate at term 3"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c received MsgVoteResp from 4fba18389b33806c at term 3"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became leader at term 3"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4fba18389b33806c elected leader 4fba18389b33806c at term 3"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.994477Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4fba18389b33806c","local-member-attributes":"{Name:multinode-348000 ClientURLs:[https://172.19.42.24:2379]}","request-path":"/0/members/4fba18389b33806c/attributes","cluster-id":"dca2ede42d67bc1c","publish-timeout":"7s"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.994493Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.994512Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.996572Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.996617Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.999043Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.42.24:2379"}
	I0419 18:59:09.861175   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.999341Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0419 18:59:09.869988   14960 logs.go:123] Gathering logs for kube-controller-manager [9638ddcd5428] ...
	I0419 18:59:09.869988   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9638ddcd5428"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:03.372734       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:03.812267       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:03.812307       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:03.816347       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:03.816460       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:03.817145       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:03.817250       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:07.961997       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:07.962027       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:07.977942       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:07.978602       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:07.980093       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:07.989698       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:07.990033       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:07.990321       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:08.005238       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:08.005791       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:08.006985       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:08.018816       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:08.019229       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:08.019480       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:08.046904       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:08.047815       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0419 18:59:09.910489   14960 command_runner.go:130] ! I0420 01:35:08.049696       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0419 18:59:09.911485   14960 command_runner.go:130] ! I0420 01:35:08.050007       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0419 18:59:09.911485   14960 command_runner.go:130] ! I0420 01:35:08.062049       1 shared_informer.go:320] Caches are synced for tokens
	I0419 18:59:09.911485   14960 command_runner.go:130] ! I0420 01:35:08.065356       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0419 18:59:09.911485   14960 command_runner.go:130] ! I0420 01:35:08.065873       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0419 18:59:09.911485   14960 command_runner.go:130] ! I0420 01:35:08.113476       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0419 18:59:09.911485   14960 command_runner.go:130] ! I0420 01:35:08.114130       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0419 18:59:09.912536   14960 command_runner.go:130] ! I0420 01:35:08.116086       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0419 18:59:09.912536   14960 command_runner.go:130] ! I0420 01:35:08.129157       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0419 18:59:09.912536   14960 command_runner.go:130] ! I0420 01:35:08.129533       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0419 18:59:09.912536   14960 command_runner.go:130] ! I0420 01:35:08.129568       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0419 18:59:09.912536   14960 command_runner.go:130] ! I0420 01:35:08.165596       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0419 18:59:09.912856   14960 command_runner.go:130] ! I0420 01:35:08.166223       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0419 18:59:09.913433   14960 command_runner.go:130] ! I0420 01:35:08.166242       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0419 18:59:09.913533   14960 command_runner.go:130] ! I0420 01:35:08.211668       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0419 18:59:09.914336   14960 command_runner.go:130] ! I0420 01:35:08.211749       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0419 18:59:09.914473   14960 command_runner.go:130] ! I0420 01:35:08.211766       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0419 18:59:09.914473   14960 command_runner.go:130] ! I0420 01:35:08.232421       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:09.914473   14960 command_runner.go:130] ! I0420 01:35:08.232496       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0419 18:59:09.914541   14960 command_runner.go:130] ! I0420 01:35:08.232934       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:09.914541   14960 command_runner.go:130] ! I0420 01:35:08.232991       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0419 18:59:09.914541   14960 command_runner.go:130] ! I0420 01:35:08.502058       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0419 18:59:09.914541   14960 command_runner.go:130] ! I0420 01:35:08.502113       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0419 18:59:09.914622   14960 command_runner.go:130] ! W0420 01:35:08.502140       1 shared_informer.go:597] resyncPeriod 21h44m16.388395173s is smaller than resyncCheckPeriod 22h35m59.940993284s and the informer has already started. Changing it to 22h35m59.940993284s
	I0419 18:59:09.914622   14960 command_runner.go:130] ! I0420 01:35:08.502208       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0419 18:59:09.914622   14960 command_runner.go:130] ! I0420 01:35:08.502278       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0419 18:59:09.914702   14960 command_runner.go:130] ! I0420 01:35:08.502298       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0419 18:59:09.914702   14960 command_runner.go:130] ! I0420 01:35:08.502314       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0419 18:59:09.914702   14960 command_runner.go:130] ! I0420 01:35:08.502330       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0419 18:59:09.914702   14960 command_runner.go:130] ! I0420 01:35:08.502351       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0419 18:59:09.914781   14960 command_runner.go:130] ! I0420 01:35:08.502407       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0419 18:59:09.914781   14960 command_runner.go:130] ! I0420 01:35:08.502437       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0419 18:59:09.914781   14960 command_runner.go:130] ! I0420 01:35:08.502458       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0419 18:59:09.914859   14960 command_runner.go:130] ! I0420 01:35:08.502479       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0419 18:59:09.914859   14960 command_runner.go:130] ! I0420 01:35:08.502501       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0419 18:59:09.914938   14960 command_runner.go:130] ! W0420 01:35:08.502514       1 shared_informer.go:597] resyncPeriod 19h4m59.465157498s is smaller than resyncCheckPeriod 22h35m59.940993284s and the informer has already started. Changing it to 22h35m59.940993284s
	I0419 18:59:09.914938   14960 command_runner.go:130] ! I0420 01:35:08.502638       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0419 18:59:09.914938   14960 command_runner.go:130] ! I0420 01:35:08.502666       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0419 18:59:09.915016   14960 command_runner.go:130] ! I0420 01:35:08.502684       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0419 18:59:09.915016   14960 command_runner.go:130] ! I0420 01:35:08.502713       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0419 18:59:09.915016   14960 command_runner.go:130] ! I0420 01:35:08.502732       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0419 18:59:09.915094   14960 command_runner.go:130] ! I0420 01:35:08.502771       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0419 18:59:09.915094   14960 command_runner.go:130] ! I0420 01:35:08.502793       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0419 18:59:09.915094   14960 command_runner.go:130] ! I0420 01:35:08.502820       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0419 18:59:09.915186   14960 command_runner.go:130] ! I0420 01:35:08.503928       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0419 18:59:09.915186   14960 command_runner.go:130] ! I0420 01:35:08.503949       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:09.915186   14960 command_runner.go:130] ! I0420 01:35:08.504053       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0419 18:59:09.915186   14960 command_runner.go:130] ! I0420 01:35:08.534828       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0419 18:59:09.915364   14960 command_runner.go:130] ! I0420 01:35:08.534961       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0419 18:59:09.915364   14960 command_runner.go:130] ! I0420 01:35:08.674769       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0419 18:59:09.915483   14960 command_runner.go:130] ! I0420 01:35:08.675139       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0419 18:59:09.915483   14960 command_runner.go:130] ! I0420 01:35:08.675159       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0419 18:59:09.915483   14960 command_runner.go:130] ! I0420 01:35:08.825012       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0419 18:59:09.915483   14960 command_runner.go:130] ! I0420 01:35:08.825352       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0419 18:59:09.915595   14960 command_runner.go:130] ! I0420 01:35:08.825549       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0419 18:59:09.915595   14960 command_runner.go:130] ! I0420 01:35:09.067591       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0419 18:59:09.915595   14960 command_runner.go:130] ! I0420 01:35:09.068206       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0419 18:59:09.915654   14960 command_runner.go:130] ! I0420 01:35:09.068502       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:09.915654   14960 command_runner.go:130] ! I0420 01:35:09.068578       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0419 18:59:09.915654   14960 command_runner.go:130] ! I0420 01:35:09.320310       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0419 18:59:09.915654   14960 command_runner.go:130] ! I0420 01:35:09.320746       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0419 18:59:09.915654   14960 command_runner.go:130] ! I0420 01:35:09.321134       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0419 18:59:09.915654   14960 command_runner.go:130] ! I0420 01:35:09.516184       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0419 18:59:09.915654   14960 command_runner.go:130] ! I0420 01:35:09.516262       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0419 18:59:09.915654   14960 command_runner.go:130] ! I0420 01:35:09.691568       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0419 18:59:09.915774   14960 command_runner.go:130] ! I0420 01:35:09.693516       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0419 18:59:09.915774   14960 command_runner.go:130] ! I0420 01:35:09.693713       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0419 18:59:09.915774   14960 command_runner.go:130] ! I0420 01:35:09.694525       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0419 18:59:09.915774   14960 command_runner.go:130] ! I0420 01:35:09.933130       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0419 18:59:09.915774   14960 command_runner.go:130] ! I0420 01:35:09.933168       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0419 18:59:09.915774   14960 command_runner.go:130] ! I0420 01:35:09.936074       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0419 18:59:09.915774   14960 command_runner.go:130] ! I0420 01:35:10.217647       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0419 18:59:09.915866   14960 command_runner.go:130] ! I0420 01:35:10.218375       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0419 18:59:09.915866   14960 command_runner.go:130] ! I0420 01:35:10.218475       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0419 18:59:09.915907   14960 command_runner.go:130] ! I0420 01:35:10.267124       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0419 18:59:09.915907   14960 command_runner.go:130] ! I0420 01:35:10.267436       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0419 18:59:09.915907   14960 command_runner.go:130] ! I0420 01:35:10.267570       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0419 18:59:09.915907   14960 command_runner.go:130] ! I0420 01:35:10.268204       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0419 18:59:09.915907   14960 command_runner.go:130] ! I0420 01:35:10.268422       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0419 18:59:09.915907   14960 command_runner.go:130] ! E0420 01:35:10.316394       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0419 18:59:09.915907   14960 command_runner.go:130] ! I0420 01:35:10.316683       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0419 18:59:09.916006   14960 command_runner.go:130] ! I0420 01:35:10.472792       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0419 18:59:09.916006   14960 command_runner.go:130] ! I0420 01:35:10.472905       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0419 18:59:09.916006   14960 command_runner.go:130] ! I0420 01:35:10.472918       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0419 18:59:09.916006   14960 command_runner.go:130] ! I0420 01:35:10.624680       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0419 18:59:09.916006   14960 command_runner.go:130] ! I0420 01:35:10.624742       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0419 18:59:09.916006   14960 command_runner.go:130] ! I0420 01:35:10.624753       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0419 18:59:09.916006   14960 command_runner.go:130] ! I0420 01:35:10.772273       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0419 18:59:09.916122   14960 command_runner.go:130] ! I0420 01:35:10.772422       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0419 18:59:09.916122   14960 command_runner.go:130] ! I0420 01:35:10.773389       1 shared_informer.go:313] Waiting for caches to sync for job
	I0419 18:59:09.916122   14960 command_runner.go:130] ! I0420 01:35:10.922317       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0419 18:59:09.916122   14960 command_runner.go:130] ! I0420 01:35:10.922464       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0419 18:59:09.916122   14960 command_runner.go:130] ! I0420 01:35:10.922478       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0419 18:59:09.916122   14960 command_runner.go:130] ! I0420 01:35:11.070777       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0419 18:59:09.916122   14960 command_runner.go:130] ! I0420 01:35:11.071059       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0419 18:59:09.916122   14960 command_runner.go:130] ! I0420 01:35:11.071119       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0419 18:59:09.916253   14960 command_runner.go:130] ! I0420 01:35:11.071166       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0419 18:59:09.916253   14960 command_runner.go:130] ! I0420 01:35:11.071195       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0419 18:59:09.916253   14960 command_runner.go:130] ! I0420 01:35:11.071205       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0419 18:59:09.916253   14960 command_runner.go:130] ! I0420 01:35:11.222012       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0419 18:59:09.916253   14960 command_runner.go:130] ! I0420 01:35:11.222056       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0419 18:59:09.916253   14960 command_runner.go:130] ! I0420 01:35:11.222746       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0419 18:59:09.916253   14960 command_runner.go:130] ! I0420 01:35:11.372624       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0419 18:59:09.916361   14960 command_runner.go:130] ! I0420 01:35:11.372812       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0419 18:59:09.916361   14960 command_runner.go:130] ! I0420 01:35:11.372965       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0419 18:59:09.916361   14960 command_runner.go:130] ! I0420 01:35:11.522757       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0419 18:59:09.916361   14960 command_runner.go:130] ! I0420 01:35:11.522983       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0419 18:59:09.916361   14960 command_runner.go:130] ! I0420 01:35:11.523000       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0419 18:59:09.916361   14960 command_runner.go:130] ! I0420 01:35:11.671210       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0419 18:59:09.916516   14960 command_runner.go:130] ! I0420 01:35:11.671410       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0419 18:59:09.916516   14960 command_runner.go:130] ! I0420 01:35:11.671429       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0419 18:59:09.916516   14960 command_runner.go:130] ! I0420 01:35:11.820688       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0419 18:59:09.916516   14960 command_runner.go:130] ! I0420 01:35:11.821596       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0419 18:59:09.916516   14960 command_runner.go:130] ! I0420 01:35:11.821935       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0419 18:59:09.916516   14960 command_runner.go:130] ! E0420 01:35:11.971137       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0419 18:59:09.916516   14960 command_runner.go:130] ! I0420 01:35:11.971301       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0419 18:59:09.916637   14960 command_runner.go:130] ! I0420 01:35:11.971316       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0419 18:59:09.916637   14960 command_runner.go:130] ! I0420 01:35:11.971323       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0419 18:59:09.916637   14960 command_runner.go:130] ! I0420 01:35:12.121255       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0419 18:59:09.916682   14960 command_runner.go:130] ! I0420 01:35:12.121746       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0419 18:59:09.916682   14960 command_runner.go:130] ! I0420 01:35:12.121947       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0419 18:59:09.916682   14960 command_runner.go:130] ! I0420 01:35:12.274169       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0419 18:59:09.916682   14960 command_runner.go:130] ! I0420 01:35:12.274383       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0419 18:59:09.916682   14960 command_runner.go:130] ! I0420 01:35:12.274402       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0419 18:59:09.916740   14960 command_runner.go:130] ! I0420 01:35:12.318009       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0419 18:59:09.916740   14960 command_runner.go:130] ! I0420 01:35:12.318126       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0419 18:59:09.916740   14960 command_runner.go:130] ! I0420 01:35:12.318164       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:09.916806   14960 command_runner.go:130] ! I0420 01:35:12.318524       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0419 18:59:09.916806   14960 command_runner.go:130] ! I0420 01:35:12.318628       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0419 18:59:09.916806   14960 command_runner.go:130] ! I0420 01:35:12.318650       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:09.916865   14960 command_runner.go:130] ! I0420 01:35:12.319568       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0419 18:59:09.916865   14960 command_runner.go:130] ! I0420 01:35:12.319800       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:09.916865   14960 command_runner.go:130] ! I0420 01:35:12.319996       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0419 18:59:09.916939   14960 command_runner.go:130] ! I0420 01:35:12.320096       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0419 18:59:09.916939   14960 command_runner.go:130] ! I0420 01:35:12.320128       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0419 18:59:09.916939   14960 command_runner.go:130] ! I0420 01:35:12.320161       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:09.916939   14960 command_runner.go:130] ! I0420 01:35:12.320270       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:09.917004   14960 command_runner.go:130] ! I0420 01:35:22.381189       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0419 18:59:09.917004   14960 command_runner.go:130] ! I0420 01:35:22.381256       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0419 18:59:09.917004   14960 command_runner.go:130] ! I0420 01:35:22.381472       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0419 18:59:09.917004   14960 command_runner.go:130] ! I0420 01:35:22.381508       1 shared_informer.go:313] Waiting for caches to sync for node
	I0419 18:59:09.917069   14960 command_runner.go:130] ! I0420 01:35:22.395580       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0419 18:59:09.917069   14960 command_runner.go:130] ! I0420 01:35:22.395660       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0419 18:59:09.917126   14960 command_runner.go:130] ! I0420 01:35:22.396587       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0419 18:59:09.917126   14960 command_runner.go:130] ! I0420 01:35:22.396886       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0419 18:59:09.917126   14960 command_runner.go:130] ! I0420 01:35:22.405182       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:09.917126   14960 command_runner.go:130] ! I0420 01:35:22.428741       1 shared_informer.go:320] Caches are synced for service account
	I0419 18:59:09.917208   14960 command_runner.go:130] ! I0420 01:35:22.430037       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0419 18:59:09.917208   14960 command_runner.go:130] ! I0420 01:35:22.433041       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0419 18:59:09.917208   14960 command_runner.go:130] ! I0420 01:35:22.440027       1 shared_informer.go:320] Caches are synced for namespace
	I0419 18:59:09.917265   14960 command_runner.go:130] ! I0420 01:35:22.466474       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:09.917265   14960 command_runner.go:130] ! I0420 01:35:22.469554       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0419 18:59:09.917265   14960 command_runner.go:130] ! I0420 01:35:22.477923       1 shared_informer.go:320] Caches are synced for PV protection
	I0419 18:59:09.917265   14960 command_runner.go:130] ! I0420 01:35:22.479748       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0419 18:59:09.917265   14960 command_runner.go:130] ! I0420 01:35:22.479794       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0419 18:59:09.917327   14960 command_runner.go:130] ! I0420 01:35:22.480700       1 shared_informer.go:320] Caches are synced for PVC protection
	I0419 18:59:09.917327   14960 command_runner.go:130] ! I0420 01:35:22.492034       1 shared_informer.go:320] Caches are synced for expand
	I0419 18:59:09.917327   14960 command_runner.go:130] ! I0420 01:35:22.492084       1 shared_informer.go:320] Caches are synced for endpoint
	I0419 18:59:09.917327   14960 command_runner.go:130] ! I0420 01:35:22.492130       1 shared_informer.go:320] Caches are synced for job
	I0419 18:59:09.917383   14960 command_runner.go:130] ! I0420 01:35:22.497920       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0419 18:59:09.917383   14960 command_runner.go:130] ! I0420 01:35:22.498399       1 shared_informer.go:320] Caches are synced for node
	I0419 18:59:09.917435   14960 command_runner.go:130] ! I0420 01:35:22.498473       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0419 18:59:09.917435   14960 command_runner.go:130] ! I0420 01:35:22.498515       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0419 18:59:09.917435   14960 command_runner.go:130] ! I0420 01:35:22.498526       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0419 18:59:09.917475   14960 command_runner.go:130] ! I0420 01:35:22.498531       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0419 18:59:09.917475   14960 command_runner.go:130] ! I0420 01:35:22.508187       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000\" does not exist"
	I0419 18:59:09.917520   14960 command_runner.go:130] ! I0420 01:35:22.508396       1 shared_informer.go:320] Caches are synced for GC
	I0419 18:59:09.917520   14960 command_runner.go:130] ! I0420 01:35:22.512585       1 shared_informer.go:320] Caches are synced for crt configmap
	I0419 18:59:09.917520   14960 command_runner.go:130] ! I0420 01:35:22.520820       1 shared_informer.go:320] Caches are synced for daemon sets
	I0419 18:59:09.917520   14960 command_runner.go:130] ! I0420 01:35:22.521073       1 shared_informer.go:320] Caches are synced for stateful set
	I0419 18:59:09.917585   14960 command_runner.go:130] ! I0420 01:35:22.521189       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0419 18:59:09.917585   14960 command_runner.go:130] ! I0420 01:35:22.521223       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0419 18:59:09.917585   14960 command_runner.go:130] ! I0420 01:35:22.521268       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0419 18:59:09.917585   14960 command_runner.go:130] ! I0420 01:35:22.527709       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0419 18:59:09.917648   14960 command_runner.go:130] ! I0420 01:35:22.528722       1 shared_informer.go:320] Caches are synced for cronjob
	I0419 18:59:09.917648   14960 command_runner.go:130] ! I0420 01:35:22.528751       1 shared_informer.go:320] Caches are synced for ephemeral
	I0419 18:59:09.917648   14960 command_runner.go:130] ! I0420 01:35:22.528767       1 shared_informer.go:320] Caches are synced for TTL
	I0419 18:59:09.917648   14960 command_runner.go:130] ! I0420 01:35:22.529370       1 shared_informer.go:320] Caches are synced for HPA
	I0419 18:59:09.917706   14960 command_runner.go:130] ! I0420 01:35:22.529414       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0419 18:59:09.917706   14960 command_runner.go:130] ! I0420 01:35:22.529477       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:09.917706   14960 command_runner.go:130] ! I0420 01:35:22.529509       1 shared_informer.go:320] Caches are synced for persistent volume
	I0419 18:59:09.917706   14960 command_runner.go:130] ! I0420 01:35:22.552273       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000" podCIDRs=["10.244.0.0/24"]
	I0419 18:59:09.917768   14960 command_runner.go:130] ! I0420 01:35:22.569198       1 shared_informer.go:320] Caches are synced for taint
	I0419 18:59:09.917768   14960 command_runner.go:130] ! I0420 01:35:22.569287       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0419 18:59:09.917768   14960 command_runner.go:130] ! I0420 01:35:22.569354       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000"
	I0419 18:59:09.917828   14960 command_runner.go:130] ! I0420 01:35:22.569429       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0419 18:59:09.917828   14960 command_runner.go:130] ! I0420 01:35:22.574991       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0419 18:59:09.917828   14960 command_runner.go:130] ! I0420 01:35:22.590559       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0419 18:59:09.917828   14960 command_runner.go:130] ! I0420 01:35:22.623057       1 shared_informer.go:320] Caches are synced for deployment
	I0419 18:59:09.917888   14960 command_runner.go:130] ! I0420 01:35:22.623597       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0419 18:59:09.917888   14960 command_runner.go:130] ! I0420 01:35:22.651041       1 shared_informer.go:320] Caches are synced for disruption
	I0419 18:59:09.917888   14960 command_runner.go:130] ! I0420 01:35:22.699011       1 shared_informer.go:320] Caches are synced for attach detach
	I0419 18:59:09.917888   14960 command_runner.go:130] ! I0420 01:35:22.705303       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:09.917954   14960 command_runner.go:130] ! I0420 01:35:22.706815       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:09.917954   14960 command_runner.go:130] ! I0420 01:35:23.168892       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:09.917954   14960 command_runner.go:130] ! I0420 01:35:23.169115       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0419 18:59:09.917954   14960 command_runner.go:130] ! I0420 01:35:23.179171       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:09.917954   14960 command_runner.go:130] ! I0420 01:35:23.263116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="374.4156ms"
	I0419 18:59:09.917954   14960 command_runner.go:130] ! I0420 01:35:23.291471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.172623ms"
	I0419 18:59:09.918034   14960 command_runner.go:130] ! I0420 01:35:23.291547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.106µs"
	I0419 18:59:09.918034   14960 command_runner.go:130] ! I0420 01:35:23.578182       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.803114ms"
	I0419 18:59:09.918106   14960 command_runner.go:130] ! I0420 01:35:23.630233       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.666311ms"
	I0419 18:59:09.918106   14960 command_runner.go:130] ! I0420 01:35:23.630467       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="183.125µs"
	I0419 18:59:09.918106   14960 command_runner.go:130] ! I0420 01:35:36.906373       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="291.116µs"
	I0419 18:59:09.918106   14960 command_runner.go:130] ! I0420 01:35:36.934151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="76.104µs"
	I0419 18:59:09.918185   14960 command_runner.go:130] ! I0420 01:35:37.573034       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0419 18:59:09.918223   14960 command_runner.go:130] ! I0420 01:35:39.217159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.488µs"
	I0419 18:59:09.918223   14960 command_runner.go:130] ! I0420 01:35:39.265403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.862669ms"
	I0419 18:59:09.918223   14960 command_runner.go:130] ! I0420 01:35:39.266023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="552.786µs"
	I0419 18:59:09.918265   14960 command_runner.go:130] ! I0420 01:38:18.575680       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m02\" does not exist"
	I0419 18:59:09.918265   14960 command_runner.go:130] ! I0420 01:38:18.590900       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m02" podCIDRs=["10.244.1.0/24"]
	I0419 18:59:09.918326   14960 command_runner.go:130] ! I0420 01:38:22.613051       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m02"
	I0419 18:59:09.918326   14960 command_runner.go:130] ! I0420 01:38:37.669535       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.918326   14960 command_runner.go:130] ! I0420 01:39:03.031296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.090021ms"
	I0419 18:59:09.918385   14960 command_runner.go:130] ! I0420 01:39:03.053897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.363721ms"
	I0419 18:59:09.918385   14960 command_runner.go:130] ! I0420 01:39:03.054543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.499µs"
	I0419 18:59:09.918385   14960 command_runner.go:130] ! I0420 01:39:05.783927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.434034ms"
	I0419 18:59:09.918440   14960 command_runner.go:130] ! I0420 01:39:05.784276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="108.901µs"
	I0419 18:59:09.918440   14960 command_runner.go:130] ! I0420 01:39:07.103598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.163039ms"
	I0419 18:59:09.918440   14960 command_runner.go:130] ! I0420 01:39:07.104054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.4µs"
	I0419 18:59:09.918440   14960 command_runner.go:130] ! I0420 01:42:52.390190       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.918502   14960 command_runner.go:130] ! I0420 01:42:52.390530       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0419 18:59:09.918502   14960 command_runner.go:130] ! I0420 01:42:52.403944       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m03" podCIDRs=["10.244.2.0/24"]
	I0419 18:59:09.918565   14960 command_runner.go:130] ! I0420 01:42:52.676079       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m03"
	I0419 18:59:09.918565   14960 command_runner.go:130] ! I0420 01:43:11.211743       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.918565   14960 command_runner.go:130] ! I0420 01:50:42.818871       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.918683   14960 command_runner.go:130] ! I0420 01:53:22.621370       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.918683   14960 command_runner.go:130] ! I0420 01:53:28.752017       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0419 18:59:09.918747   14960 command_runner.go:130] ! I0420 01:53:28.753300       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.918747   14960 command_runner.go:130] ! I0420 01:53:28.789161       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m03" podCIDRs=["10.244.3.0/24"]
	I0419 18:59:09.918799   14960 command_runner.go:130] ! I0420 01:53:36.097701       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m03"
	I0419 18:59:09.918799   14960 command_runner.go:130] ! I0420 01:55:13.205537       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:09.942596   14960 logs.go:123] Gathering logs for Docker ...
	I0419 18:59:09.942596   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 18:59:09.976592   14960 command_runner.go:130] > Apr 20 01:56:27 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:09.976592   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:09.976694   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:09.976694   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:09.976793   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0419 18:59:09.976793   14960 command_runner.go:130] > Apr 20 01:56:28 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:09.976793   14960 command_runner.go:130] > Apr 20 01:56:28 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:09.976896   14960 command_runner.go:130] > Apr 20 01:56:28 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:09.976896   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0419 18:59:09.976896   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0419 18:59:09.977006   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:09.977006   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:09.977109   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:09.977109   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:09.977109   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0419 18:59:09.977211   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:09.977211   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:09.977317   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:09.977317   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0419 18:59:09.977317   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0419 18:59:09.977423   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:09.977423   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:09.977423   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:09.977523   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:09.977523   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0419 18:59:09.977622   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:09.977622   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:09.977622   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:09.977723   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0419 18:59:09.977723   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0419 18:59:09.977826   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0419 18:59:09.977883   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:09.977936   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:09.977936   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 systemd[1]: Starting Docker Application Container Engine...
	I0419 18:59:09.978016   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[657]: time="2024-04-20T01:57:18.710176447Z" level=info msg="Starting up"
	I0419 18:59:09.978064   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[657]: time="2024-04-20T01:57:18.711651787Z" level=info msg="containerd not running, starting managed containerd"
	I0419 18:59:09.978150   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[657]: time="2024-04-20T01:57:18.716746379Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=664
	I0419 18:59:09.978150   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.747165139Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0419 18:59:09.978253   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778478063Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0419 18:59:09.978253   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778645056Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0419 18:59:09.978253   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778743452Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0419 18:59:09.978356   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778860747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.978356   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.780842867Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:09.978458   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.780950062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.978574   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781281849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:09.978574   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781381945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.978674   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781405744Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0419 18:59:09.978674   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781418543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.978772   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781890324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.978772   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.782561296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.978873   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786065554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:09.978873   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786174049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.978972   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786324143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:09.978972   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786418639Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0419 18:59:09.979076   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.787110911Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0419 18:59:09.979076   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.787239206Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0419 18:59:09.979176   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.787257405Z" level=info msg="metadata content store policy set" policy=shared
	I0419 18:59:09.979176   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794203322Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0419 18:59:09.979272   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794271219Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0419 18:59:09.979272   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794292218Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0419 18:59:09.979272   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794308818Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0419 18:59:09.979375   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794325217Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0419 18:59:09.979375   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794399514Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0419 18:59:09.979473   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794805397Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0419 18:59:09.979473   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795021089Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0419 18:59:09.979572   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795123284Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0419 18:59:09.979572   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795209281Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0419 18:59:09.979671   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795227280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.979671   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795252079Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.979770   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795270178Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.979822   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795305177Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.979822   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795321176Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.979910   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795336476Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.979910   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795368674Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.980009   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795383074Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.980009   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795405873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980009   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795423972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980109   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795438172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980109   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795453671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980222   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795468970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980318   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795483970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980318   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795576866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980415   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795594465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980415   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795610465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980515   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795628364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980515   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795642863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980515   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795657163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980649   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795671762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980649   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795713760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0419 18:59:09.980774   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795756259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980774   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795811856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.980911   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795843255Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0419 18:59:09.980911   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795920052Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0419 18:59:09.981019   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795944151Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0419 18:59:09.981019   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796175542Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0419 18:59:09.981217   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796194141Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0419 18:59:09.981217   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796263238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.981330   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796305336Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0419 18:59:09.981330   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796319336Z" level=info msg="NRI interface is disabled by configuration."
	I0419 18:59:09.981330   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.797416591Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0419 18:59:09.981453   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.797499188Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0419 18:59:09.981453   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.797659381Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0419 18:59:09.981553   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.798178860Z" level=info msg="containerd successfully booted in 0.054054s"
	I0419 18:59:09.981553   14960 command_runner.go:130] > Apr 20 01:57:19 multinode-348000 dockerd[657]: time="2024-04-20T01:57:19.782299514Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0419 18:59:09.981656   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.015692930Z" level=info msg="Loading containers: start."
	I0419 18:59:09.981656   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.458486133Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0419 18:59:09.981794   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.551244732Z" level=info msg="Loading containers: done."
	I0419 18:59:09.981794   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.579065252Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	I0419 18:59:09.981794   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.579904847Z" level=info msg="Daemon has completed initialization"
	I0419 18:59:09.981898   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.637363974Z" level=info msg="API listen on [::]:2376"
	I0419 18:59:09.981898   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 systemd[1]: Started Docker Application Container Engine.
	I0419 18:59:09.982018   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.639403561Z" level=info msg="API listen on /var/run/docker.sock"
	I0419 18:59:09.982018   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.472939019Z" level=info msg="Processing signal 'terminated'"
	I0419 18:59:09.982018   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 systemd[1]: Stopping Docker Application Container Engine...
	I0419 18:59:09.982133   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.475778002Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0419 18:59:09.982133   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.476696029Z" level=info msg="Daemon shutdown complete"
	I0419 18:59:09.982237   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.476992338Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0419 18:59:09.982285   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.477157542Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0419 18:59:09.982320   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 systemd[1]: docker.service: Deactivated successfully.
	I0419 18:59:09.982320   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 systemd[1]: Stopped Docker Application Container Engine.
	I0419 18:59:09.982407   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 systemd[1]: Starting Docker Application Container Engine...
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:47.551071055Z" level=info msg="Starting up"
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:47.552229889Z" level=info msg="containerd not running, starting managed containerd"
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:47.555196776Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1058
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.593728507Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623742487Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623851391Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623939793Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623957394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624003795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624024296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624225802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624329205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624352205Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624363806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624391206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624622913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.627825907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.627876709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628096615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628227419Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628259620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628280321Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628292621Z" level=info msg="metadata content store policy set" policy=shared
	I0419 18:59:09.982452   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628514127Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0419 18:59:09.982991   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628716033Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0419 18:59:09.982991   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628764035Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0419 18:59:09.982991   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628783935Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0419 18:59:09.982991   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628872138Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0419 18:59:09.982991   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628938240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0419 18:59:09.983163   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.629513057Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.629754764Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.629936569Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630060973Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630086474Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630105074Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630122275Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630140375Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630157976Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630174076Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630191277Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630206077Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630234378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630252178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630267579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630283379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630298980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630314780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630328781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630360082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630377682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630410083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630423583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630455984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630487185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630505186Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0419 18:59:09.983209   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630528987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983747   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630643490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.983747   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630666391Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0419 18:59:09.983747   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630895497Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0419 18:59:09.983747   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630922398Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0419 18:59:09.983747   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630934798Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0419 18:59:09.983747   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630945799Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0419 18:59:09.983971   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.631020001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0419 18:59:09.984090   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.631067102Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0419 18:59:09.984163   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.631083303Z" level=info msg="NRI interface is disabled by configuration."
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632230736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632319639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632396541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632594347Z" level=info msg="containerd successfully booted in 0.042627s"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:48 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:48.604760074Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:48 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:48.637031921Z" level=info msg="Loading containers: start."
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:48 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:48.936729515Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.021589305Z" level=info msg="Loading containers: done."
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.048182786Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.048316590Z" level=info msg="Daemon has completed initialization"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.095567976Z" level=info msg="API listen on /var/run/docker.sock"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 systemd[1]: Started Docker Application Container Engine.
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.098304756Z" level=info msg="API listen on [::]:2376"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Loaded network plugin cni"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0419 18:59:09.984208   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Start cri-dockerd grpc backend"
	I0419 18:59:09.984744   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0419 18:59:09.984744   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-xnz2k_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"476e3efb38684054cbc21c027cf1ddd3f9ca47bb829786f8636fd877fd4b2f81\""
	I0419 18:59:09.984744   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-7w477_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2dd294415aae178d6b9bed0368d49bedc6d0afa8f5b9ad0011c73ffcb2c24b3c\""
	I0419 18:59:09.985013   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.930297132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.985069   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.930785146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.985241   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.930860749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985300   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.931659072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985349   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002064338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.985401   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002134840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.985497   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002149541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985544   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002292345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985599   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e8baa597c1467ae8c3a1ce9abf0a378ddcffed5a93f7b41dddb4ce4511320dfd/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:09.985650   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151299517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151377019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151407720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151504323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169004837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169190142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169211543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169324146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/118cca57d1f547838d0c2442f2945e9daf9b041170bf162489525286bf3d75c2/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7052a6f04def38545970026f2934eb29913066396b26eb86f6675e7c0c685db/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ab9ff1d9068805d6a2ad10084128436e5b1fcaaa8c64f2f1a5e811455f0f99ee/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441120322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441388229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441493933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441783141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.541538868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.541743874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.541768275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.542244089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.635958239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.985779   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.636305549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.986319   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.636479754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.986319   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.636776363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.986319   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.703176711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.986319   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.703241613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.986319   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.703253713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.986555   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.704949863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.986621   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:00Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0419 18:59:09.986672   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.682944236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.986730   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.683066839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.986781   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.683087340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.986835   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.683203743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.986887   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.775229244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.986944   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.775527153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.987046   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.775671457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.987099   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.776004967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.987152   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.791300015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.987202   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.791478721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.987304   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.791611925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.987359   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.792335946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.987411   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/09f65a695303814b61d199dd53caa1efad532c76b04176a404206b865fd6b38a/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:09.987465   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5472c1fba3929b8a427273be545db7fb7df3c0ffbf035e24a1d3b71418b9e031/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:09.987572   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.150688061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.987622   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.150834665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.987678   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.151084573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.987744   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.152395011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.987796   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.341191051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.987912   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.341388457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.987976   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.341505460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.988048   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.342279283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.988114   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b5a777eba295e3b640d8d8a60aedcc20243d0f4a6fc4d3f3391b06fc6de0247a/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:09.988263   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.851490425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.988321   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.852225247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.988382   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.852338750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.988490   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.853459583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.988541   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1052]: time="2024-04-20T01:58:23.324898945Z" level=info msg="ignoring event" container=f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0419 18:59:09.988594   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:23.325982179Z" level=info msg="shim disconnected" id=f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919 namespace=moby
	I0419 18:59:09.988697   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:23.326071582Z" level=warning msg="cleaning up after shim disconnected" id=f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919 namespace=moby
	I0419 18:59:09.988751   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:23.326085983Z" level=info msg="cleaning up dead shim" namespace=moby
	I0419 18:59:09.988806   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1052]: time="2024-04-20T01:58:32.676558128Z" level=info msg="ignoring event" container=45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0419 18:59:09.988867   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:32.681127769Z" level=info msg="shim disconnected" id=45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702 namespace=moby
	I0419 18:59:09.988936   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:32.681255073Z" level=warning msg="cleaning up after shim disconnected" id=45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702 namespace=moby
	I0419 18:59:09.989006   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:32.681323075Z" level=info msg="cleaning up dead shim" namespace=moby
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356286643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356444648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356547351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356850260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.371313874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.372274603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.372497010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.373020725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.468874089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.469011493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.469033394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.469948221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.577907307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.578194516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.578360121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.578991939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989048   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:59:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f28a1e746a9b438367a8e05d2e1a085afb4abec4174f7a7eb80549e02b95047a/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:09.989634   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:59:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/75ff9f4e9dde29a997e4321dd3659a2ce7d479a75826a78c4d3525f1eb5f696f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.046055457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.046333943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.046360842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.047301594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.170326341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.170444835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.170467134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.171235195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:09.989727   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:12.546276   14960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 18:59:12.575530   14960 command_runner.go:130] > 1877
	I0419 18:59:12.575637   14960 api_server.go:72] duration metric: took 1m6.9907902s to wait for apiserver process to appear ...
	I0419 18:59:12.575637   14960 api_server.go:88] waiting for apiserver healthz status ...
	I0419 18:59:12.586822   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 18:59:12.612864   14960 command_runner.go:130] > bd3aa93bac25
	I0419 18:59:12.612954   14960 logs.go:276] 1 containers: [bd3aa93bac25]
	I0419 18:59:12.625411   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 18:59:12.655543   14960 command_runner.go:130] > 2deabe4dbdf4
	I0419 18:59:12.656099   14960 logs.go:276] 1 containers: [2deabe4dbdf4]
	I0419 18:59:12.666517   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 18:59:12.693989   14960 command_runner.go:130] > 352cf21a3e20
	I0419 18:59:12.694081   14960 command_runner.go:130] > 627b84abf45c
	I0419 18:59:12.694081   14960 logs.go:276] 2 containers: [352cf21a3e20 627b84abf45c]
	I0419 18:59:12.705809   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 18:59:12.736207   14960 command_runner.go:130] > d57aee391c14
	I0419 18:59:12.736266   14960 command_runner.go:130] > e476774b8f77
	I0419 18:59:12.736266   14960 logs.go:276] 2 containers: [d57aee391c14 e476774b8f77]
	I0419 18:59:12.747925   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 18:59:12.773815   14960 command_runner.go:130] > e438af0f1ec9
	I0419 18:59:12.773815   14960 command_runner.go:130] > a6586791413d
	I0419 18:59:12.775175   14960 logs.go:276] 2 containers: [e438af0f1ec9 a6586791413d]
	I0419 18:59:12.786498   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 18:59:12.826401   14960 command_runner.go:130] > b67f2295d26c
	I0419 18:59:12.826452   14960 command_runner.go:130] > 9638ddcd5428
	I0419 18:59:12.826483   14960 logs.go:276] 2 containers: [b67f2295d26c 9638ddcd5428]
	I0419 18:59:12.836351   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 18:59:12.867729   14960 command_runner.go:130] > ae0b21715f86
	I0419 18:59:12.868779   14960 command_runner.go:130] > f8c798c99407
	I0419 18:59:12.868824   14960 logs.go:276] 2 containers: [ae0b21715f86 f8c798c99407]
	I0419 18:59:12.868875   14960 logs.go:123] Gathering logs for kubelet ...
	I0419 18:59:12.868875   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 18:59:12.901971   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0419 18:59:12.902423   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: I0420 01:57:51.575772    1390 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0419 18:59:12.902464   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: I0420 01:57:51.576306    1390 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:12.902464   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: I0420 01:57:51.577194    1390 server.go:927] "Client rotation is on, will bootstrap in background"
	I0419 18:59:12.902500   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: E0420 01:57:51.579651    1390 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: I0420 01:57:52.300689    1443 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: I0420 01:57:52.301056    1443 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: I0420 01:57:52.301551    1443 server.go:927] "Client rotation is on, will bootstrap in background"
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: E0420 01:57:52.301845    1443 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.955182    1526 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.955367    1526 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.955676    1526 server.go:927] "Client rotation is on, will bootstrap in background"
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.957661    1526 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.971626    1526 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.998144    1526 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.998312    1526 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.999775    1526 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0419 18:59:12.902529   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:54.999948    1526 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-348000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0419 18:59:12.903076   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.000770    1526 topology_manager.go:138] "Creating topology manager with none policy"
	I0419 18:59:12.903076   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.000879    1526 container_manager_linux.go:301] "Creating device plugin manager"
	I0419 18:59:12.903076   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.001855    1526 state_mem.go:36] "Initialized new in-memory state store"
	I0419 18:59:12.903076   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.003861    1526 kubelet.go:400] "Attempting to sync node with API server"
	I0419 18:59:12.903129   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.003952    1526 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0419 18:59:12.903129   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.004045    1526 kubelet.go:312] "Adding apiserver pod source"
	I0419 18:59:12.903129   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.009472    1526 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0419 18:59:12.903129   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.017989    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.903216   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.018091    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.903304   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.019381    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.903304   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.019428    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.903344   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.019619    1526 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.1" apiVersion="v1"
	I0419 18:59:12.903344   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.022328    1526 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0419 18:59:12.903344   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.023051    1526 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.025680    1526 server.go:1264] "Started kubelet"
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.028955    1526 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.031361    1526 server.go:455] "Adding debug handlers to kubelet server"
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.034499    1526 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.035670    1526 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.036524    1526 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.19.42.24:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-348000.17c7da5cb9bb1787  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-348000,UID:multinode-348000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-348000,},FirstTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,LastTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-348000,}"
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.053292    1526 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.062175    1526 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.067879    1526 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.097159    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="200ms"
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.116285    1526 factory.go:221] Registration of the systemd container factory successfully
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.117073    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.903411   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.118285    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.117970    1526 reconciler.go:26] "Reconciler: start to sync state"
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.118962    1526 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.119576    1526 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.135081    1526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.165861    1526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166700    1526 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166759    1526 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166846    1526 state_mem.go:36] "Initialized new in-memory state store"
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166997    1526 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168395    1526 kubelet.go:2337] "Starting kubelet main sync loop"
	I0419 18:59:12.905346   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.168500    1526 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168338    1526 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168585    1526 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168613    1526 policy_none.go:49] "None policy: Start"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.167637    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.171087    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.172453    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.172557    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.187830    1526 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.187946    1526 state_mem.go:35] "Initializing new in-memory state store"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.189368    1526 state_mem.go:75] "Updated machine memory state"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.195268    1526 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.195483    1526 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.197626    1526 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.198638    1526 iptables.go:577] "Could not set up iptables canary" err=<
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.201551    1526 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-348000\" not found"
	I0419 18:59:12.905948   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.269451    1526 topology_manager.go:215] "Topology Admit Handler" podUID="30aa2729d0c65b9f89e1ae2d151edd9b" podNamespace="kube-system" podName="kube-controller-manager-multinode-348000"
	I0419 18:59:12.906486   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.271913    1526 topology_manager.go:215] "Topology Admit Handler" podUID="92813b2aed63b63058d3fd06709fa24e" podNamespace="kube-system" podName="kube-scheduler-multinode-348000"
	I0419 18:59:12.906486   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.273779    1526 topology_manager.go:215] "Topology Admit Handler" podUID="af7a3c9321ace7e2a933260472b90113" podNamespace="kube-system" podName="kube-apiserver-multinode-348000"
	I0419 18:59:12.906539   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.275662    1526 topology_manager.go:215] "Topology Admit Handler" podUID="c0cfa3da6a3913c3e67500f6c3e9d72b" podNamespace="kube-system" podName="etcd-multinode-348000"
	I0419 18:59:12.906539   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.281258    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="476e3efb38684054cbc21c027cf1ddd3f9ca47bb829786f8636fd877fd4b2f81"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.281433    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dd294415aae178d6b9bed0368d49bedc6d0afa8f5b9ad0011c73ffcb2c24b3c"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.281454    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5d733991bf1a9e82ffd10768e0652c6c3f983ab24307142345cab3358f068bc"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.297657    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd9e5fae3950c99e6cc71d6166919d407b00212c93827d74e5b83f3896925c0a"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.310354    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="400ms"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.316552    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="187cb57784f4ebcba88e5bf725c118a7d2beec4f543d3864e8f389573f0b11f9"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.332421    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e420625b84be10aa87409a43f4296165b33ed76e82c3ba8a9214abd7177bd38"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.356050    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00d48e11227effb5f0316d58c24e374b4b3f9dcd1b98ac51d6b0038a72d47e42"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.372330    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.373779    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.376042    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da1d06ec238f43c7ad43cae75e142a6d15b9c8fb69f88ad8079f167f3f3a6fd4"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.392858    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7935893e9f22a54393d2b3d0a644f7c11a848d5604938074232342a8602e239f"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423082    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-ca-certs\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423312    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-flexvolume-dir\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423400    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-k8s-certs\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423427    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-kubeconfig\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423456    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af7a3c9321ace7e2a933260472b90113-ca-certs\") pod \"kube-apiserver-multinode-348000\" (UID: \"af7a3c9321ace7e2a933260472b90113\") " pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:12.906582   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423489    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/c0cfa3da6a3913c3e67500f6c3e9d72b-etcd-data\") pod \"etcd-multinode-348000\" (UID: \"c0cfa3da6a3913c3e67500f6c3e9d72b\") " pod="kube-system/etcd-multinode-348000"
	I0419 18:59:12.907126   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423525    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:12.907126   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423552    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92813b2aed63b63058d3fd06709fa24e-kubeconfig\") pod \"kube-scheduler-multinode-348000\" (UID: \"92813b2aed63b63058d3fd06709fa24e\") " pod="kube-system/kube-scheduler-multinode-348000"
	I0419 18:59:12.907126   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423669    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af7a3c9321ace7e2a933260472b90113-k8s-certs\") pod \"kube-apiserver-multinode-348000\" (UID: \"af7a3c9321ace7e2a933260472b90113\") " pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:12.907221   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423703    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af7a3c9321ace7e2a933260472b90113-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-348000\" (UID: \"af7a3c9321ace7e2a933260472b90113\") " pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:12.907221   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423739    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/c0cfa3da6a3913c3e67500f6c3e9d72b-etcd-certs\") pod \"etcd-multinode-348000\" (UID: \"c0cfa3da6a3913c3e67500f6c3e9d72b\") " pod="kube-system/etcd-multinode-348000"
	I0419 18:59:12.907323   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.518144    1526 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.19.42.24:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-348000.17c7da5cb9bb1787  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-348000,UID:multinode-348000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-348000,},FirstTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,LastTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-348000,}"
	I0419 18:59:12.907377   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.713067    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="800ms"
	I0419 18:59:12.907377   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.777032    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:12.907414   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.778597    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:12.907439   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.832721    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.907474   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.832971    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.907512   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: W0420 01:57:56.061439    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.907580   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.063005    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.907604   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: W0420 01:57:56.073517    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.073647    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: W0420 01:57:56.303763    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.303918    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.515345    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="1.6s"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: I0420 01:57:56.583532    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.584646    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:57:58 multinode-348000 kubelet[1526]: I0420 01:57:58.185924    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.850138    1526 kubelet_node_status.go:112] "Node was previously registered" node="multinode-348000"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.850459    1526 kubelet_node_status.go:76] "Successfully registered node" node="multinode-348000"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.852895    1526 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.854574    1526 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.855598    1526 setters.go:580] "Node became not ready" node="multinode-348000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-04-20T01:58:00Z","lastTransitionTime":"2024-04-20T01:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.022496    1526 apiserver.go:52] "Watching apiserver"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.028549    1526 topology_manager.go:215] "Topology Admit Handler" podUID="274342c4-c21f-4279-b0ea-743d8e2c1463" podNamespace="kube-system" podName="kube-proxy-kj76x"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.028950    1526 topology_manager.go:215] "Topology Admit Handler" podUID="46c91d5e-edfa-4254-a802-148047caeab5" podNamespace="kube-system" podName="kindnet-s4fsr"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.029150    1526 topology_manager.go:215] "Topology Admit Handler" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7w477"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.029359    1526 topology_manager.go:215] "Topology Admit Handler" podUID="ffa0cfb9-91fb-4d5b-abe7-11992c731b74" podNamespace="kube-system" podName="storage-provisioner"
	I0419 18:59:12.907632   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.029596    1526 topology_manager.go:215] "Topology Admit Handler" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916" podNamespace="default" podName="busybox-fc5497c4f-xnz2k"
	I0419 18:59:12.908169   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.030004    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.908169   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.030339    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-348000" podUID="af4afa87-c484-4b73-9a4d-e86ddcd90049"
	I0419 18:59:12.908234   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.031127    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-348000" podUID="18f5e677-6a96-47ee-9f61-60ab9445eb92"
	I0419 18:59:12.908234   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.036486    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.908234   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.078433    1526 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-348000"
	I0419 18:59:12.908234   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.080072    1526 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.080948    1526 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.155980    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/274342c4-c21f-4279-b0ea-743d8e2c1463-xtables-lock\") pod \"kube-proxy-kj76x\" (UID: \"274342c4-c21f-4279-b0ea-743d8e2c1463\") " pod="kube-system/kube-proxy-kj76x"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.156217    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/274342c4-c21f-4279-b0ea-743d8e2c1463-lib-modules\") pod \"kube-proxy-kj76x\" (UID: \"274342c4-c21f-4279-b0ea-743d8e2c1463\") " pod="kube-system/kube-proxy-kj76x"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157104    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/46c91d5e-edfa-4254-a802-148047caeab5-cni-cfg\") pod \"kindnet-s4fsr\" (UID: \"46c91d5e-edfa-4254-a802-148047caeab5\") " pod="kube-system/kindnet-s4fsr"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157248    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46c91d5e-edfa-4254-a802-148047caeab5-xtables-lock\") pod \"kindnet-s4fsr\" (UID: \"46c91d5e-edfa-4254-a802-148047caeab5\") " pod="kube-system/kindnet-s4fsr"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.157178    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.157539    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:01.657504317 +0000 UTC m=+6.817666984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157392    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ffa0cfb9-91fb-4d5b-abe7-11992c731b74-tmp\") pod \"storage-provisioner\" (UID: \"ffa0cfb9-91fb-4d5b-abe7-11992c731b74\") " pod="kube-system/storage-provisioner"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157844    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46c91d5e-edfa-4254-a802-148047caeab5-lib-modules\") pod \"kindnet-s4fsr\" (UID: \"46c91d5e-edfa-4254-a802-148047caeab5\") " pod="kube-system/kindnet-s4fsr"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.176143    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89aa15d5f8e328791151d96100a36918" path="/var/lib/kubelet/pods/89aa15d5f8e328791151d96100a36918/volumes"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.179130    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fef0b92f87f018a58c19217fdf5d6e1" path="/var/lib/kubelet/pods/8fef0b92f87f018a58c19217fdf5d6e1/volumes"
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.206903    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.207139    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.207264    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:01.707244177 +0000 UTC m=+6.867406744 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.908324   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.241569    1526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-348000" podStartSLOduration=0.241545984 podStartE2EDuration="241.545984ms" podCreationTimestamp="2024-04-20 01:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-20 01:58:01.218870918 +0000 UTC m=+6.379033485" watchObservedRunningTime="2024-04-20 01:58:01.241545984 +0000 UTC m=+6.401708551"
	I0419 18:59:12.908848   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.287607    1526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-348000" podStartSLOduration=0.287584435 podStartE2EDuration="287.584435ms" podCreationTimestamp="2024-04-20 01:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-20 01:58:01.265671392 +0000 UTC m=+6.425834059" watchObservedRunningTime="2024-04-20 01:58:01.287584435 +0000 UTC m=+6.447747102"
	I0419 18:59:12.908848   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.663973    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:12.908889   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.664077    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:02.664058382 +0000 UTC m=+7.824220949 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:12.909015   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.764474    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.764518    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.764584    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:02.764566131 +0000 UTC m=+7.924728698 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: I0420 01:58:02.563904    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5a777eba295e3b640d8d8a60aedcc20243d0f4a6fc4d3f3391b06fc6de0247a"
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.564077    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: I0420 01:58:02.565075    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-348000" podUID="af4afa87-c484-4b73-9a4d-e86ddcd90049"
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.679358    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.679588    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:04.67956768 +0000 UTC m=+9.839730247 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.789713    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.791860    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.792206    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:04.792183185 +0000 UTC m=+9.952345752 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:03 multinode-348000 kubelet[1526]: E0420 01:58:03.170851    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.169519    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.700421    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.700676    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:08.700644486 +0000 UTC m=+13.860807053 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:12.909043   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.801637    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909565   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.801751    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909607   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.801874    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:08.801835856 +0000 UTC m=+13.961998423 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909607   14960 command_runner.go:130] > Apr 20 01:58:05 multinode-348000 kubelet[1526]: E0420 01:58:05.169947    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.909607   14960 command_runner.go:130] > Apr 20 01:58:06 multinode-348000 kubelet[1526]: E0420 01:58:06.169499    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.909743   14960 command_runner.go:130] > Apr 20 01:58:07 multinode-348000 kubelet[1526]: E0420 01:58:07.170147    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.909795   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.169208    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.909795   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.751778    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.752347    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:16.752328447 +0000 UTC m=+21.912491114 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.852291    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.852347    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.852455    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:16.852435774 +0000 UTC m=+22.012598341 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:09 multinode-348000 kubelet[1526]: E0420 01:58:09.169017    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:10 multinode-348000 kubelet[1526]: E0420 01:58:10.169399    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:11 multinode-348000 kubelet[1526]: E0420 01:58:11.169467    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:12 multinode-348000 kubelet[1526]: E0420 01:58:12.169441    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:13 multinode-348000 kubelet[1526]: E0420 01:58:13.169983    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:14 multinode-348000 kubelet[1526]: E0420 01:58:14.169635    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:15 multinode-348000 kubelet[1526]: E0420 01:58:15.169488    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.169756    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.835157    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.835299    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:32.835279204 +0000 UTC m=+37.995441771 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:12.909884   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.936116    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.910426   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.936169    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.910476   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.936232    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:32.936212581 +0000 UTC m=+38.096375148 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.910476   14960 command_runner.go:130] > Apr 20 01:58:17 multinode-348000 kubelet[1526]: E0420 01:58:17.169160    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.910604   14960 command_runner.go:130] > Apr 20 01:58:18 multinode-348000 kubelet[1526]: E0420 01:58:18.171760    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.910604   14960 command_runner.go:130] > Apr 20 01:58:19 multinode-348000 kubelet[1526]: E0420 01:58:19.169723    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.910604   14960 command_runner.go:130] > Apr 20 01:58:20 multinode-348000 kubelet[1526]: E0420 01:58:20.169542    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.910693   14960 command_runner.go:130] > Apr 20 01:58:21 multinode-348000 kubelet[1526]: E0420 01:58:21.169675    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.910744   14960 command_runner.go:130] > Apr 20 01:58:22 multinode-348000 kubelet[1526]: E0420 01:58:22.169364    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.910744   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: E0420 01:58:23.169569    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.910744   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: I0420 01:58:23.960680    1526 scope.go:117] "RemoveContainer" containerID="8a37c65d06fabf8d836ffb9a511bb6df5b549fa37051ef79f1f839076af60512"
	I0419 18:59:12.910744   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: I0420 01:58:23.961154    1526 scope.go:117] "RemoveContainer" containerID="f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919"
	I0419 18:59:12.910837   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: E0420 01:58:23.961603    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kindnet-cni pod=kindnet-s4fsr_kube-system(46c91d5e-edfa-4254-a802-148047caeab5)\"" pod="kube-system/kindnet-s4fsr" podUID="46c91d5e-edfa-4254-a802-148047caeab5"
	I0419 18:59:12.910837   14960 command_runner.go:130] > Apr 20 01:58:24 multinode-348000 kubelet[1526]: E0420 01:58:24.169608    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.910837   14960 command_runner.go:130] > Apr 20 01:58:25 multinode-348000 kubelet[1526]: E0420 01:58:25.169976    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.910837   14960 command_runner.go:130] > Apr 20 01:58:26 multinode-348000 kubelet[1526]: E0420 01:58:26.169734    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:27 multinode-348000 kubelet[1526]: E0420 01:58:27.170054    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:28 multinode-348000 kubelet[1526]: E0420 01:58:28.169260    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:29 multinode-348000 kubelet[1526]: E0420 01:58:29.169306    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:30 multinode-348000 kubelet[1526]: E0420 01:58:30.169857    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:31 multinode-348000 kubelet[1526]: E0420 01:58:31.169543    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.169556    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.891318    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.891496    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:59:04.891477649 +0000 UTC m=+70.051640216 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.992269    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.992577    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.992723    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:59:04.992688767 +0000 UTC m=+70.152851434 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:12.911462   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: I0420 01:58:33.115355    1526 scope.go:117] "RemoveContainer" containerID="e248c230a4aa379bf469f41a95d1ea2033316d322a10b6da0ae06f656334b936"
	I0419 18:59:12.912019   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: I0420 01:58:33.115897    1526 scope.go:117] "RemoveContainer" containerID="45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702"
	I0419 18:59:12.912019   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: E0420 01:58:33.116183    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ffa0cfb9-91fb-4d5b-abe7-11992c731b74)\"" pod="kube-system/storage-provisioner" podUID="ffa0cfb9-91fb-4d5b-abe7-11992c731b74"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: E0420 01:58:33.169303    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:34 multinode-348000 kubelet[1526]: E0420 01:58:34.169175    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:35 multinode-348000 kubelet[1526]: E0420 01:58:35.169508    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 kubelet[1526]: E0420 01:58:36.169960    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 kubelet[1526]: I0420 01:58:36.170769    1526 scope.go:117] "RemoveContainer" containerID="f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:37 multinode-348000 kubelet[1526]: E0420 01:58:37.171433    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:38 multinode-348000 kubelet[1526]: E0420 01:58:38.169747    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:39 multinode-348000 kubelet[1526]: E0420 01:58:39.169252    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:40 multinode-348000 kubelet[1526]: E0420 01:58:40.169368    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:40 multinode-348000 kubelet[1526]: I0420 01:58:40.269590    1526 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 kubelet[1526]: I0420 01:58:45.169759    1526 scope.go:117] "RemoveContainer" containerID="45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]: I0420 01:58:55.162183    1526 scope.go:117] "RemoveContainer" containerID="490377504e57c3189163833390967e79bb80d222691d4402677feb6f25ed22f4"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]: I0420 01:58:55.206283    1526 scope.go:117] "RemoveContainer" containerID="53f6a00490766be2eb687e6fff052ca7a46ae16a0baf4551e956c81550d673b2"
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]: E0420 01:58:55.212558    1526 iptables.go:577] "Could not set up iptables canary" err=<
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0419 18:59:12.912072   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0419 18:59:12.912614   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0419 18:59:12.912614   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 kubelet[1526]: I0420 01:59:05.918992    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75ff9f4e9dde29a997e4321dd3659a2ce7d479a75826a78c4d3525f1eb5f696f"
	I0419 18:59:12.912614   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 kubelet[1526]: I0420 01:59:05.948376    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f28a1e746a9b438367a8e05d2e1a085afb4abec4174f7a7eb80549e02b95047a"
	I0419 18:59:12.955048   14960 logs.go:123] Gathering logs for kube-apiserver [bd3aa93bac25] ...
	I0419 18:59:12.956069   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd3aa93bac25"
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:57.501840       1 options.go:221] external host was not specified, using 172.19.42.24
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:57.505380       1 server.go:148] Version: v1.30.0
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:57.505690       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:58.138487       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:58.138530       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:58.138987       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:58.139098       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:58.139890       1 instance.go:299] Using reconciler: lease
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.078678       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.078889       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.354874       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.355339       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.630985       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.818361       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.834974       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.835019       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.835028       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.835870       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.835981       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.837241       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.838781       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.838919       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.838930       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.841133       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.841240       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.842492       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.842627       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.842640       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.843439       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.843519       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.843649       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.844516       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.847031       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.847132       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.847143       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.847848       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.847881       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.847889       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.849069       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.849173       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.851437       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.851563       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.851574       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.852258       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.852357       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.852367       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.855318       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.855413       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.855499       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.857232       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.859073       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.859177       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! W0420 01:57:59.859187       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.985425   14960 command_runner.go:130] ! I0420 01:57:59.866540       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0419 18:59:12.986761   14960 command_runner.go:130] ! W0420 01:57:59.866633       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0419 18:59:12.986761   14960 command_runner.go:130] ! W0420 01:57:59.866643       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0419 18:59:12.986761   14960 command_runner.go:130] ! I0420 01:57:59.873672       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0419 18:59:12.986761   14960 command_runner.go:130] ! W0420 01:57:59.873814       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.986835   14960 command_runner.go:130] ! W0420 01:57:59.873827       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:12.986877   14960 command_runner.go:130] ! I0420 01:57:59.875959       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0419 18:59:12.986877   14960 command_runner.go:130] ! W0420 01:57:59.875999       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.986877   14960 command_runner.go:130] ! I0420 01:57:59.909243       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0419 18:59:12.986920   14960 command_runner.go:130] ! W0420 01:57:59.909284       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:12.986975   14960 command_runner.go:130] ! I0420 01:58:00.597195       1 secure_serving.go:213] Serving securely on [::]:8443
	I0419 18:59:12.986975   14960 command_runner.go:130] ! I0420 01:58:00.597666       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:12.987008   14960 command_runner.go:130] ! I0420 01:58:00.598134       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.597703       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.597737       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.600064       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.600948       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.601165       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.601445       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.602539       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.602852       1 aggregator.go:163] waiting for initial CRD sync...
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.603187       1 controller.go:78] Starting OpenAPI AggregationController
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.604023       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.604384       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.606631       1 available_controller.go:423] Starting AvailableConditionController
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.606857       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607138       1 controller.go:116] Starting legacy_token_tracking_controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607178       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607325       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607349       1 controller.go:139] Starting OpenAPI controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607381       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607407       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607409       1 naming_controller.go:291] Starting NamingConditionController
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607487       1 establishing_controller.go:76] Starting EstablishingController
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607512       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607530       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607546       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.608170       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.608198       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.608328       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.608421       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.607383       1 controller.go:87] Starting OpenAPI V3 controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.709605       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.736531       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.737086       1 shared_informer.go:320] Caches are synced for configmaps
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.737192       1 aggregator.go:165] initial CRD sync complete...
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.737219       1 autoregister_controller.go:141] Starting autoregister controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.737225       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0419 18:59:12.987057   14960 command_runner.go:130] ! I0420 01:58:00.737230       1 cache.go:39] Caches are synced for autoregister controller
	I0419 18:59:12.987664   14960 command_runner.go:130] ! I0420 01:58:00.740699       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 18:59:12.987744   14960 command_runner.go:130] ! I0420 01:58:00.741004       1 policy_source.go:224] refreshing policies
	I0419 18:59:12.987744   14960 command_runner.go:130] ! I0420 01:58:00.742672       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0419 18:59:12.987744   14960 command_runner.go:130] ! I0420 01:58:00.747054       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0419 18:59:12.987744   14960 command_runner.go:130] ! I0420 01:58:00.805770       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0419 18:59:12.987744   14960 command_runner.go:130] ! I0420 01:58:00.807460       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0419 18:59:12.987744   14960 command_runner.go:130] ! I0420 01:58:00.814456       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0419 18:59:12.987856   14960 command_runner.go:130] ! I0420 01:58:00.814490       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0419 18:59:12.987856   14960 command_runner.go:130] ! I0420 01:58:00.815844       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0419 18:59:12.987893   14960 command_runner.go:130] ! I0420 01:58:01.612010       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0419 18:59:12.987893   14960 command_runner.go:130] ! W0420 01:58:02.160618       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.42.231 172.19.42.24]
	I0419 18:59:12.987893   14960 command_runner.go:130] ! I0420 01:58:02.163332       1 controller.go:615] quota admission added evaluator for: endpoints
	I0419 18:59:12.987941   14960 command_runner.go:130] ! I0420 01:58:02.176968       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0419 18:59:12.987941   14960 command_runner.go:130] ! I0420 01:58:03.430204       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0419 18:59:12.987977   14960 command_runner.go:130] ! I0420 01:58:03.761410       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0419 18:59:12.987977   14960 command_runner.go:130] ! I0420 01:58:03.780335       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0419 18:59:12.987977   14960 command_runner.go:130] ! I0420 01:58:03.907022       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0419 18:59:12.988020   14960 command_runner.go:130] ! I0420 01:58:03.924019       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0419 18:59:12.988060   14960 command_runner.go:130] ! W0420 01:58:22.143512       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.42.24]
	I0419 18:59:12.996606   14960 logs.go:123] Gathering logs for kindnet [ae0b21715f86] ...
	I0419 18:59:12.996606   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0b21715f86"
	I0419 18:59:13.027805   14960 command_runner.go:130] ! I0420 01:58:36.715209       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0419 18:59:13.027904   14960 command_runner.go:130] ! I0420 01:58:36.715359       1 main.go:107] hostIP = 172.19.42.24
	I0419 18:59:13.027904   14960 command_runner.go:130] ! podIP = 172.19.42.24
	I0419 18:59:13.027904   14960 command_runner.go:130] ! I0420 01:58:36.715480       1 main.go:116] setting mtu 1500 for CNI 
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:36.715877       1 main.go:146] kindnetd IP family: "ipv4"
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:36.806023       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:37.413197       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:37.413291       1 main.go:227] handling current node
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:37.413685       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:37.413745       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:37.414005       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.19.32.249 Flags: [] Table: 0} 
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:37.506308       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:37.506405       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:37.506676       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.19.37.59 Flags: [] Table: 0} 
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:47.525508       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:47.525608       1 main.go:227] handling current node
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:47.525629       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:47.525638       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:47.526101       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:47.526135       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:57.538448       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:57.538834       1 main.go:227] handling current node
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:57.538899       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:13.027987   14960 command_runner.go:130] ! I0420 01:58:57.538926       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:13.028514   14960 command_runner.go:130] ! I0420 01:58:57.539176       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:13.028514   14960 command_runner.go:130] ! I0420 01:58:57.539274       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:13.028514   14960 command_runner.go:130] ! I0420 01:59:07.555783       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:13.028588   14960 command_runner.go:130] ! I0420 01:59:07.555932       1 main.go:227] handling current node
	I0419 18:59:13.028588   14960 command_runner.go:130] ! I0420 01:59:07.556426       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:13.028738   14960 command_runner.go:130] ! I0420 01:59:07.556438       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:13.028738   14960 command_runner.go:130] ! I0420 01:59:07.556563       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:13.028738   14960 command_runner.go:130] ! I0420 01:59:07.556590       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:13.037007   14960 logs.go:123] Gathering logs for coredns [352cf21a3e20] ...
	I0419 18:59:13.037007   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 352cf21a3e20"
	I0419 18:59:13.069693   14960 command_runner.go:130] > .:53
	I0419 18:59:13.070597   14960 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93714cfd58e203ac2baa48ea9c7b435951d2a9faed7a5c70b4e84c89c6c1fe4c1dfa41f14b3ebf0f5941dade673a82eaad960061e673dd78dcb856db3393b39d
	I0419 18:59:13.070636   14960 command_runner.go:130] > CoreDNS-1.11.1
	I0419 18:59:13.070636   14960 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0419 18:59:13.070636   14960 command_runner.go:130] > [INFO] 127.0.0.1:51206 - 14298 "HINFO IN 4972057462503628469.2167329557243878603. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028297062s
	I0419 18:59:13.076229   14960 logs.go:123] Gathering logs for kube-proxy [e438af0f1ec9] ...
	I0419 18:59:13.076229   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e438af0f1ec9"
	I0419 18:59:13.108366   14960 command_runner.go:130] ! I0420 01:58:03.129201       1 server_linux.go:69] "Using iptables proxy"
	I0419 18:59:13.108735   14960 command_runner.go:130] ! I0420 01:58:03.201631       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.42.24"]
	I0419 18:59:13.108780   14960 command_runner.go:130] ! I0420 01:58:03.344058       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 18:59:13.108780   14960 command_runner.go:130] ! I0420 01:58:03.344107       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 18:59:13.108811   14960 command_runner.go:130] ! I0420 01:58:03.344137       1 server_linux.go:165] "Using iptables Proxier"
	I0419 18:59:13.108860   14960 command_runner.go:130] ! I0420 01:58:03.353394       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 18:59:13.108898   14960 command_runner.go:130] ! I0420 01:58:03.354462       1 server.go:872] "Version info" version="v1.30.0"
	I0419 18:59:13.108898   14960 command_runner.go:130] ! I0420 01:58:03.354693       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:13.108940   14960 command_runner.go:130] ! I0420 01:58:03.358325       1 config.go:192] "Starting service config controller"
	I0419 18:59:13.108978   14960 command_runner.go:130] ! I0420 01:58:03.358366       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 18:59:13.108978   14960 command_runner.go:130] ! I0420 01:58:03.358985       1 config.go:101] "Starting endpoint slice config controller"
	I0419 18:59:13.108978   14960 command_runner.go:130] ! I0420 01:58:03.359176       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 18:59:13.108978   14960 command_runner.go:130] ! I0420 01:58:03.358997       1 config.go:319] "Starting node config controller"
	I0419 18:59:13.109020   14960 command_runner.go:130] ! I0420 01:58:03.368409       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 18:59:13.109020   14960 command_runner.go:130] ! I0420 01:58:03.459372       1 shared_informer.go:320] Caches are synced for service config
	I0419 18:59:13.109051   14960 command_runner.go:130] ! I0420 01:58:03.459745       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 18:59:13.109051   14960 command_runner.go:130] ! I0420 01:58:03.470538       1 shared_informer.go:320] Caches are synced for node config
	I0419 18:59:13.113308   14960 logs.go:123] Gathering logs for kube-proxy [a6586791413d] ...
	I0419 18:59:13.113308   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6586791413d"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.120497       1 server_linux.go:69] "Using iptables proxy"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.156956       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.42.231"]
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.208282       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.208472       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.208501       1 server_linux.go:165] "Using iptables Proxier"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.214693       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.216114       1 server.go:872] "Version info" version="v1.30.0"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.216181       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.219192       1 config.go:192] "Starting service config controller"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.219810       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.220079       1 config.go:101] "Starting endpoint slice config controller"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.220093       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.221802       1 config.go:319] "Starting node config controller"
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.221980       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.320313       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.320380       1 shared_informer.go:320] Caches are synced for service config
	I0419 18:59:13.146531   14960 command_runner.go:130] ! I0420 01:35:26.322323       1 shared_informer.go:320] Caches are synced for node config
	I0419 18:59:13.148587   14960 logs.go:123] Gathering logs for kube-controller-manager [9638ddcd5428] ...
	I0419 18:59:13.148587   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9638ddcd5428"
	I0419 18:59:13.191813   14960 command_runner.go:130] ! I0420 01:35:03.372734       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:13.192583   14960 command_runner.go:130] ! I0420 01:35:03.812267       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0419 18:59:13.192583   14960 command_runner.go:130] ! I0420 01:35:03.812307       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:13.192583   14960 command_runner.go:130] ! I0420 01:35:03.816347       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:13.192788   14960 command_runner.go:130] ! I0420 01:35:03.816460       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:13.192805   14960 command_runner.go:130] ! I0420 01:35:03.817145       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0419 18:59:13.192805   14960 command_runner.go:130] ! I0420 01:35:03.817250       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:13.192805   14960 command_runner.go:130] ! I0420 01:35:07.961997       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0419 18:59:13.192855   14960 command_runner.go:130] ! I0420 01:35:07.962027       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0419 18:59:13.192855   14960 command_runner.go:130] ! I0420 01:35:07.977942       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0419 18:59:13.192893   14960 command_runner.go:130] ! I0420 01:35:07.978602       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0419 18:59:13.192893   14960 command_runner.go:130] ! I0420 01:35:07.980093       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0419 18:59:13.192893   14960 command_runner.go:130] ! I0420 01:35:07.989698       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0419 18:59:13.193592   14960 command_runner.go:130] ! I0420 01:35:07.990033       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0419 18:59:13.193925   14960 command_runner.go:130] ! I0420 01:35:07.990321       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.005238       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.005791       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.006985       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.018816       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.019229       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.019480       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.046904       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.047815       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.049696       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.050007       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.062049       1 shared_informer.go:320] Caches are synced for tokens
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.065356       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.065873       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.113476       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.114130       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.116086       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.129157       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.129533       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.129568       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.165596       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.166223       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.166242       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.211668       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.211749       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.211766       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.232421       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.232496       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.232934       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.232991       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.502058       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.502113       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0419 18:59:13.194004   14960 command_runner.go:130] ! W0420 01:35:08.502140       1 shared_informer.go:597] resyncPeriod 21h44m16.388395173s is smaller than resyncCheckPeriod 22h35m59.940993284s and the informer has already started. Changing it to 22h35m59.940993284s
	I0419 18:59:13.194004   14960 command_runner.go:130] ! I0420 01:35:08.502208       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502278       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502298       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502314       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502330       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502351       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502407       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502437       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502458       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502479       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502501       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! W0420 01:35:08.502514       1 shared_informer.go:597] resyncPeriod 19h4m59.465157498s is smaller than resyncCheckPeriod 22h35m59.940993284s and the informer has already started. Changing it to 22h35m59.940993284s
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502638       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502666       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502684       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502713       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502732       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502771       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502793       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.502820       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.503928       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.503949       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.504053       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.534828       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.534961       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.674769       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.675139       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.675159       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.825012       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.825352       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:08.825549       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.067591       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.068206       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.068502       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.068578       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.320310       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.320746       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.321134       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.516184       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.516262       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.691568       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.693516       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.693713       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.694525       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.933130       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.933168       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:09.936074       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.217647       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.218375       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.218475       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.267124       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.267436       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.267570       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.268204       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.268422       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0419 18:59:13.194971   14960 command_runner.go:130] ! E0420 01:35:10.316394       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.316683       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.472792       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.472905       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0419 18:59:13.194971   14960 command_runner.go:130] ! I0420 01:35:10.472918       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:10.624680       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:10.624742       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:10.624753       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:10.772273       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:10.772422       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:10.773389       1 shared_informer.go:313] Waiting for caches to sync for job
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:10.922317       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:10.922464       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:10.922478       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.070777       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.071059       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.071119       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.071166       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.071195       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.071205       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.222012       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.222056       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.222746       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.372624       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.372812       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.372965       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.522757       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.522983       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.523000       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.671210       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.671410       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.671429       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.820688       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.821596       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.821935       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0419 18:59:13.195959   14960 command_runner.go:130] ! E0420 01:35:11.971137       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.971301       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.971316       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:11.971323       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.121255       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.121746       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.121947       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.274169       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.274383       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.274402       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.318009       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.318126       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.318164       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.318524       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.318628       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.318650       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.319568       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.319800       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.319996       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.320096       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.320128       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.320161       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:12.320270       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:22.381189       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:22.381256       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:22.381472       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:22.381508       1 shared_informer.go:313] Waiting for caches to sync for node
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:22.395580       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:22.395660       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:22.396587       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0419 18:59:13.195959   14960 command_runner.go:130] ! I0420 01:35:22.396886       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.405182       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.428741       1 shared_informer.go:320] Caches are synced for service account
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.430037       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.433041       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.440027       1 shared_informer.go:320] Caches are synced for namespace
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.466474       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.469554       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.477923       1 shared_informer.go:320] Caches are synced for PV protection
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.479748       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.479794       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.480700       1 shared_informer.go:320] Caches are synced for PVC protection
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.492034       1 shared_informer.go:320] Caches are synced for expand
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.492084       1 shared_informer.go:320] Caches are synced for endpoint
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.492130       1 shared_informer.go:320] Caches are synced for job
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.497920       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.498399       1 shared_informer.go:320] Caches are synced for node
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.498473       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.498515       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.498526       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.498531       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.508187       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000\" does not exist"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.508396       1 shared_informer.go:320] Caches are synced for GC
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.512585       1 shared_informer.go:320] Caches are synced for crt configmap
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.520820       1 shared_informer.go:320] Caches are synced for daemon sets
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.521073       1 shared_informer.go:320] Caches are synced for stateful set
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.521189       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.521223       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.521268       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.527709       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.528722       1 shared_informer.go:320] Caches are synced for cronjob
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.528751       1 shared_informer.go:320] Caches are synced for ephemeral
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.528767       1 shared_informer.go:320] Caches are synced for TTL
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.529370       1 shared_informer.go:320] Caches are synced for HPA
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.529414       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.529477       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.529509       1 shared_informer.go:320] Caches are synced for persistent volume
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.552273       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000" podCIDRs=["10.244.0.0/24"]
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.569198       1 shared_informer.go:320] Caches are synced for taint
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.569287       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.569354       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.569429       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.574991       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.590559       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.623057       1 shared_informer.go:320] Caches are synced for deployment
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.623597       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.651041       1 shared_informer.go:320] Caches are synced for disruption
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.699011       1 shared_informer.go:320] Caches are synced for attach detach
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.705303       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:22.706815       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:23.168892       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:23.169115       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:23.179171       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:23.263116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="374.4156ms"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:23.291471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.172623ms"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:23.291547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.106µs"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:23.578182       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.803114ms"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:23.630233       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.666311ms"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:23.630467       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="183.125µs"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:36.906373       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="291.116µs"
	I0419 18:59:13.197032   14960 command_runner.go:130] ! I0420 01:35:36.934151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="76.104µs"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:35:37.573034       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:35:39.217159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.488µs"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:35:39.265403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.862669ms"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:35:39.266023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="552.786µs"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:38:18.575680       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m02\" does not exist"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:38:18.590900       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m02" podCIDRs=["10.244.1.0/24"]
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:38:22.613051       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m02"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:38:37.669535       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:39:03.031296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.090021ms"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:39:03.053897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.363721ms"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:39:03.054543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.499µs"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:39:05.783927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.434034ms"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:39:05.784276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="108.901µs"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:39:07.103598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.163039ms"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:39:07.104054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.4µs"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:42:52.390190       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:42:52.390530       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:42:52.403944       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m03" podCIDRs=["10.244.2.0/24"]
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:42:52.676079       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m03"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:43:11.211743       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:50:42.818871       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:53:22.621370       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:53:28.752017       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:53:28.753300       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:53:28.789161       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m03" podCIDRs=["10.244.3.0/24"]
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:53:36.097701       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m03"
	I0419 18:59:13.198002   14960 command_runner.go:130] ! I0420 01:55:13.205537       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.217572   14960 logs.go:123] Gathering logs for Docker ...
	I0419 18:59:13.217572   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 18:59:13.251198   14960 command_runner.go:130] > Apr 20 01:56:27 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:13.251259   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:13.251259   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:13.251259   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:13.251313   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0419 18:59:13.251348   14960 command_runner.go:130] > Apr 20 01:56:28 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:28 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:28 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 systemd[1]: Starting Docker Application Container Engine...
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[657]: time="2024-04-20T01:57:18.710176447Z" level=info msg="Starting up"
	I0419 18:59:13.251376   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[657]: time="2024-04-20T01:57:18.711651787Z" level=info msg="containerd not running, starting managed containerd"
	I0419 18:59:13.251914   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[657]: time="2024-04-20T01:57:18.716746379Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=664
	I0419 18:59:13.251914   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.747165139Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0419 18:59:13.251963   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778478063Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0419 18:59:13.252020   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778645056Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0419 18:59:13.252058   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778743452Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0419 18:59:13.252058   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778860747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.252090   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.780842867Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:13.252145   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.780950062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781281849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781381945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781405744Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781418543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781890324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.782561296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786065554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786174049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786324143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786418639Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.787110911Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.787239206Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.787257405Z" level=info msg="metadata content store policy set" policy=shared
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794203322Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794271219Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794292218Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794308818Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794325217Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794399514Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794805397Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795021089Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795123284Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795209281Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795227280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.252187   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795252079Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.252776   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795270178Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.252876   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795305177Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.252876   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795321176Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.252876   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795336476Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.252876   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795368674Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.252876   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795383074Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.253037   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795405873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253037   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795423972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253037   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795438172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253098   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795453671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253098   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795468970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253137   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795483970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253137   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795576866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253186   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795594465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253186   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795610465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253224   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795628364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253265   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795642863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253265   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795657163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253305   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795671762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253342   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795713760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0419 18:59:13.253342   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795756259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253381   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795811856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253416   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795843255Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0419 18:59:13.253416   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795920052Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0419 18:59:13.253455   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795944151Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0419 18:59:13.253489   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796175542Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0419 18:59:13.253561   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796194141Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0419 18:59:13.253561   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796263238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.253611   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796305336Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0419 18:59:13.253646   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796319336Z" level=info msg="NRI interface is disabled by configuration."
	I0419 18:59:13.253683   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.797416591Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0419 18:59:13.253683   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.797499188Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0419 18:59:13.253717   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.797659381Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0419 18:59:13.253717   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.798178860Z" level=info msg="containerd successfully booted in 0.054054s"
	I0419 18:59:13.253788   14960 command_runner.go:130] > Apr 20 01:57:19 multinode-348000 dockerd[657]: time="2024-04-20T01:57:19.782299514Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0419 18:59:13.253788   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.015692930Z" level=info msg="Loading containers: start."
	I0419 18:59:13.253826   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.458486133Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0419 18:59:13.253859   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.551244732Z" level=info msg="Loading containers: done."
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.579065252Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.579904847Z" level=info msg="Daemon has completed initialization"
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.637363974Z" level=info msg="API listen on [::]:2376"
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 systemd[1]: Started Docker Application Container Engine.
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.639403561Z" level=info msg="API listen on /var/run/docker.sock"
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.472939019Z" level=info msg="Processing signal 'terminated'"
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 systemd[1]: Stopping Docker Application Container Engine...
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.475778002Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.476696029Z" level=info msg="Daemon shutdown complete"
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.476992338Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.477157542Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 systemd[1]: docker.service: Deactivated successfully.
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 systemd[1]: Stopped Docker Application Container Engine.
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 systemd[1]: Starting Docker Application Container Engine...
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:47.551071055Z" level=info msg="Starting up"
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:47.552229889Z" level=info msg="containerd not running, starting managed containerd"
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:47.555196776Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1058
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.593728507Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623742487Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623851391Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623939793Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623957394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624003795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624024296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624225802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624329205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624352205Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624363806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624391206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624622913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.627825907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:13.253898   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.627876709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:13.254479   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628096615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:13.254529   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628227419Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0419 18:59:13.254529   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628259620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0419 18:59:13.254529   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628280321Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0419 18:59:13.254529   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628292621Z" level=info msg="metadata content store policy set" policy=shared
	I0419 18:59:13.254529   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628514127Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0419 18:59:13.254634   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628716033Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0419 18:59:13.254634   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628764035Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0419 18:59:13.254676   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628783935Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0419 18:59:13.254676   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628872138Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0419 18:59:13.254729   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628938240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0419 18:59:13.254729   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.629513057Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0419 18:59:13.254767   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.629754764Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0419 18:59:13.254809   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.629936569Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0419 18:59:13.254849   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630060973Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0419 18:59:13.254849   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630086474Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.254892   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630105074Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.254892   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630122275Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.254938   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630140375Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.254938   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630157976Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.254980   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630174076Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.255019   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630191277Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.255064   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630206077Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0419 18:59:13.255103   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630234378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255103   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630252178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255138   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630267579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630283379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630298980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630314780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630328781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630360082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630377682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630410083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630423583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630455984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630487185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630505186Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630528987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630643490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630666391Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630895497Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630922398Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630934798Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630945799Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.631020001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.631067102Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.631083303Z" level=info msg="NRI interface is disabled by configuration."
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632230736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632319639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0419 18:59:13.255176   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632396541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0419 18:59:13.255807   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632594347Z" level=info msg="containerd successfully booted in 0.042627s"
	I0419 18:59:13.255807   14960 command_runner.go:130] > Apr 20 01:57:48 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:48.604760074Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0419 18:59:13.255807   14960 command_runner.go:130] > Apr 20 01:57:48 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:48.637031921Z" level=info msg="Loading containers: start."
	I0419 18:59:13.255807   14960 command_runner.go:130] > Apr 20 01:57:48 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:48.936729515Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0419 18:59:13.255925   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.021589305Z" level=info msg="Loading containers: done."
	I0419 18:59:13.255925   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.048182786Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	I0419 18:59:13.255925   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.048316590Z" level=info msg="Daemon has completed initialization"
	I0419 18:59:13.255925   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.095567976Z" level=info msg="API listen on /var/run/docker.sock"
	I0419 18:59:13.256021   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 systemd[1]: Started Docker Application Container Engine.
	I0419 18:59:13.256021   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.098304756Z" level=info msg="API listen on [::]:2376"
	I0419 18:59:13.256021   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:13.256021   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:13.256116   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:13.256116   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:13.256116   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0419 18:59:13.256116   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Loaded network plugin cni"
	I0419 18:59:13.256195   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0419 18:59:13.256195   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0419 18:59:13.256195   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0419 18:59:13.256195   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0419 18:59:13.256195   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Start cri-dockerd grpc backend"
	I0419 18:59:13.256274   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0419 18:59:13.256274   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-xnz2k_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"476e3efb38684054cbc21c027cf1ddd3f9ca47bb829786f8636fd877fd4b2f81\""
	I0419 18:59:13.256353   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-7w477_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2dd294415aae178d6b9bed0368d49bedc6d0afa8f5b9ad0011c73ffcb2c24b3c\""
	I0419 18:59:13.256353   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.930297132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.256353   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.930785146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.256430   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.930860749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.256430   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.931659072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.256507   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002064338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.256507   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002134840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.256507   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002149541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.256584   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002292345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.256584   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e8baa597c1467ae8c3a1ce9abf0a378ddcffed5a93f7b41dddb4ce4511320dfd/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:13.256661   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151299517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.256661   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151377019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.256661   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151407720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.256738   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151504323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.256738   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169004837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.256738   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169190142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.256816   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169211543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.256816   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169324146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.256816   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/118cca57d1f547838d0c2442f2945e9daf9b041170bf162489525286bf3d75c2/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:13.256893   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7052a6f04def38545970026f2934eb29913066396b26eb86f6675e7c0c685db/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:13.256893   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ab9ff1d9068805d6a2ad10084128436e5b1fcaaa8c64f2f1a5e811455f0f99ee/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:13.256970   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441120322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.256970   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441388229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.256970   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441493933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257047   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441783141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257047   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.541538868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257047   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.541743874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257123   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.541768275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257123   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.542244089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257199   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.635958239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257199   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.636305549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257199   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.636479754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257276   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.636776363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257276   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.703176711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257352   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.703241613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257352   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.703253713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257352   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.704949863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257459   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:00Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0419 18:59:13.257459   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.682944236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257530   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.683066839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257530   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.683087340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257530   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.683203743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257605   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.775229244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257605   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.775527153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257605   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.775671457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257677   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.776004967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257677   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.791300015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257677   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.791478721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257750   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.791611925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257750   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.792335946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257750   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/09f65a695303814b61d199dd53caa1efad532c76b04176a404206b865fd6b38a/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:13.257822   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5472c1fba3929b8a427273be545db7fb7df3c0ffbf035e24a1d3b71418b9e031/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:13.257822   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.150688061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257899   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.150834665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257899   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.151084573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257940   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.152395011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.341191051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.341388457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.341505460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.342279283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b5a777eba295e3b640d8d8a60aedcc20243d0f4a6fc4d3f3391b06fc6de0247a/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.851490425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.852225247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.852338750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.853459583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1052]: time="2024-04-20T01:58:23.324898945Z" level=info msg="ignoring event" container=f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:23.325982179Z" level=info msg="shim disconnected" id=f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919 namespace=moby
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:23.326071582Z" level=warning msg="cleaning up after shim disconnected" id=f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919 namespace=moby
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:23.326085983Z" level=info msg="cleaning up dead shim" namespace=moby
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1052]: time="2024-04-20T01:58:32.676558128Z" level=info msg="ignoring event" container=45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:32.681127769Z" level=info msg="shim disconnected" id=45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702 namespace=moby
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:32.681255073Z" level=warning msg="cleaning up after shim disconnected" id=45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702 namespace=moby
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:32.681323075Z" level=info msg="cleaning up dead shim" namespace=moby
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356286643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356444648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356547351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356850260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.371313874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.372274603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.372497010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.257969   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.373020725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.258489   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.468874089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.258489   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.469011493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.258489   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.469033394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.258489   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.469948221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.258489   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.577907307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.258489   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.578194516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.258489   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.578360121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.258620   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.578991939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.258658   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:59:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f28a1e746a9b438367a8e05d2e1a085afb4abec4174f7a7eb80549e02b95047a/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:59:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/75ff9f4e9dde29a997e4321dd3659a2ce7d479a75826a78c4d3525f1eb5f696f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.046055457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.046333943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.046360842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.047301594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.170326341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.170444835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.170467134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.171235195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:12 multinode-348000 dockerd[1052]: 2024/04/20 01:59:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.258690   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.259226   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.259226   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.259226   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.259226   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:13.293582   14960 logs.go:123] Gathering logs for container status ...
	I0419 18:59:13.293582   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 18:59:13.374555   14960 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0419 18:59:13.374555   14960 command_runner.go:130] > d608b74b0597f       8c811b4aec35f                                                                                         8 seconds ago        Running             busybox                   1                   75ff9f4e9dde2       busybox-fc5497c4f-xnz2k
	I0419 18:59:13.374555   14960 command_runner.go:130] > 352cf21a3e202       cbb01a7bd410d                                                                                         8 seconds ago        Running             coredns                   1                   f28a1e746a9b4       coredns-7db6d8ff4d-7w477
	I0419 18:59:13.374555   14960 command_runner.go:130] > c6f350bee7762       6e38f40d628db                                                                                         28 seconds ago       Running             storage-provisioner       2                   5472c1fba3929       storage-provisioner
	I0419 18:59:13.374555   14960 command_runner.go:130] > ae0b21715f861       4950bb10b3f87                                                                                         37 seconds ago       Running             kindnet-cni               2                   b5a777eba295e       kindnet-s4fsr
	I0419 18:59:13.374555   14960 command_runner.go:130] > f8c798c994078       4950bb10b3f87                                                                                         About a minute ago   Exited              kindnet-cni               1                   b5a777eba295e       kindnet-s4fsr
	I0419 18:59:13.375294   14960 command_runner.go:130] > 45383c4290ad1       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   5472c1fba3929       storage-provisioner
	I0419 18:59:13.375294   14960 command_runner.go:130] > e438af0f1ec9e       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   09f65a6953038       kube-proxy-kj76x
	I0419 18:59:13.375294   14960 command_runner.go:130] > 2deabe4dbdf41       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   ab9ff1d906880       etcd-multinode-348000
	I0419 18:59:13.375433   14960 command_runner.go:130] > bd3aa93bac25b       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   d7052a6f04def       kube-apiserver-multinode-348000
	I0419 18:59:13.375471   14960 command_runner.go:130] > b67f2295d26ca       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   118cca57d1f54       kube-controller-manager-multinode-348000
	I0419 18:59:13.375497   14960 command_runner.go:130] > d57aee391c146       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   e8baa597c1467       kube-scheduler-multinode-348000
	I0419 18:59:13.375544   14960 command_runner.go:130] > d8afb3e1fb946       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   476e3efb38684       busybox-fc5497c4f-xnz2k
	I0419 18:59:13.375604   14960 command_runner.go:130] > 627b84abf45cd       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   2dd294415aae1       coredns-7db6d8ff4d-7w477
	I0419 18:59:13.375629   14960 command_runner.go:130] > a6586791413d0       a0bf559e280cf                                                                                         23 minutes ago       Exited              kube-proxy                0                   7935893e9f22a       kube-proxy-kj76x
	I0419 18:59:13.375629   14960 command_runner.go:130] > 9638ddcd54285       c7aad43836fa5                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   6e420625b84be       kube-controller-manager-multinode-348000
	I0419 18:59:13.375629   14960 command_runner.go:130] > e476774b8f77e       259c8277fcbbc                                                                                         24 minutes ago       Exited              kube-scheduler            0                   e5d733991bf1a       kube-scheduler-multinode-348000
	I0419 18:59:13.377871   14960 logs.go:123] Gathering logs for etcd [2deabe4dbdf4] ...
	I0419 18:59:13.378001   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2deabe4dbdf4"
	I0419 18:59:13.408304   14960 command_runner.go:130] ! {"level":"warn","ts":"2024-04-20T01:57:57.046906Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0419 18:59:13.408523   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.051203Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.19.42.24:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.19.42.24:2380","--initial-cluster=multinode-348000=https://172.19.42.24:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.19.42.24:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.19.42.24:2380","--name=multinode-348000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0419 18:59:13.408523   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.05132Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0419 18:59:13.408641   14960 command_runner.go:130] ! {"level":"warn","ts":"2024-04-20T01:57:57.053068Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0419 18:59:13.408641   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.053085Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.19.42.24:2380"]}
	I0419 18:59:13.408714   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.053402Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0419 18:59:13.408756   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.06821Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"]}
	I0419 18:59:13.408836   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.071769Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-348000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.19.42.24:2380"],"listen-peer-urls":["https://172.19.42.24:2380"],"advertise-client-urls":["https://172.19.42.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0419 18:59:13.408943   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.117145Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"37.959314ms"}
	I0419 18:59:13.408969   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.163657Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0419 18:59:13.409000   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186114Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","commit-index":1996}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c switched to configuration voters=()"}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became follower at term 2"}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 4fba18389b33806c [peers: [], term: 2, commit: 1996, applied: 0, lastindex: 1996, lastterm: 2]"}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"warn","ts":"2024-04-20T01:57:57.204366Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.210889Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1364}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.22333Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1726}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.233905Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.247902Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"4fba18389b33806c","timeout":"7s"}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.252957Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"4fba18389b33806c"}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.253239Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"4fba18389b33806c","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0419 18:59:13.409056   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.257675Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0419 18:59:13.409630   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.259962Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0419 18:59:13.409630   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.260237Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0419 18:59:13.409630   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.26046Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0419 18:59:13.409630   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c switched to configuration voters=(5744930906065567852)"}
	I0419 18:59:13.409779   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264281Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","added-peer-id":"4fba18389b33806c","added-peer-peer-urls":["https://172.19.42.231:2380"]}
	I0419 18:59:13.409839   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264439Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","cluster-version":"3.5"}
	I0419 18:59:13.409839   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264612Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0419 18:59:13.409839   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.271976Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0419 18:59:13.409839   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.273753Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4fba18389b33806c","initial-advertise-peer-urls":["https://172.19.42.24:2380"],"listen-peer-urls":["https://172.19.42.24:2380"],"advertise-client-urls":["https://172.19.42.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0419 18:59:13.410177   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.27526Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0419 18:59:13.410177   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.27622Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.42.24:2380"}
	I0419 18:59:13.410177   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.277207Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.42.24:2380"}
	I0419 18:59:13.410255   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c is starting a new election at term 2"}
	I0419 18:59:13.410282   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became pre-candidate at term 2"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c received MsgPreVoteResp from 4fba18389b33806c at term 2"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became candidate at term 3"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c received MsgVoteResp from 4fba18389b33806c at term 3"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became leader at term 3"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4fba18389b33806c elected leader 4fba18389b33806c at term 3"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.994477Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4fba18389b33806c","local-member-attributes":"{Name:multinode-348000 ClientURLs:[https://172.19.42.24:2379]}","request-path":"/0/members/4fba18389b33806c/attributes","cluster-id":"dca2ede42d67bc1c","publish-timeout":"7s"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.994493Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.994512Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.996572Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.996617Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.999043Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.42.24:2379"}
	I0419 18:59:13.410319   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.999341Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0419 18:59:13.417189   14960 logs.go:123] Gathering logs for coredns [627b84abf45c] ...
	I0419 18:59:13.418082   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627b84abf45c"
	I0419 18:59:13.455979   14960 command_runner.go:130] > .:53
	I0419 18:59:13.456857   14960 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93714cfd58e203ac2baa48ea9c7b435951d2a9faed7a5c70b4e84c89c6c1fe4c1dfa41f14b3ebf0f5941dade673a82eaad960061e673dd78dcb856db3393b39d
	I0419 18:59:13.456857   14960 command_runner.go:130] > CoreDNS-1.11.1
	I0419 18:59:13.456857   14960 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0419 18:59:13.456857   14960 command_runner.go:130] > [INFO] 127.0.0.1:37904 - 37003 "HINFO IN 1336380353163369387.5260466772500757990. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.053891439s
	I0419 18:59:13.456857   14960 command_runner.go:130] > [INFO] 10.244.1.2:47846 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002913s
	I0419 18:59:13.456936   14960 command_runner.go:130] > [INFO] 10.244.1.2:60728 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.118385602s
	I0419 18:59:13.456936   14960 command_runner.go:130] > [INFO] 10.244.1.2:48827 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.043741711s
	I0419 18:59:13.456979   14960 command_runner.go:130] > [INFO] 10.244.1.2:57126 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.111854404s
	I0419 18:59:13.456979   14960 command_runner.go:130] > [INFO] 10.244.0.3:44468 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001971s
	I0419 18:59:13.456979   14960 command_runner.go:130] > [INFO] 10.244.0.3:58477 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.002287005s
	I0419 18:59:13.457024   14960 command_runner.go:130] > [INFO] 10.244.0.3:39825 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000198301s
	I0419 18:59:13.457024   14960 command_runner.go:130] > [INFO] 10.244.0.3:54956 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000604s
	I0419 18:59:13.457049   14960 command_runner.go:130] > [INFO] 10.244.1.2:48593 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001261s
	I0419 18:59:13.457049   14960 command_runner.go:130] > [INFO] 10.244.1.2:58743 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.027871268s
	I0419 18:59:13.457098   14960 command_runner.go:130] > [INFO] 10.244.1.2:44517 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002274s
	I0419 18:59:13.457098   14960 command_runner.go:130] > [INFO] 10.244.1.2:35998 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000219501s
	I0419 18:59:13.457158   14960 command_runner.go:130] > [INFO] 10.244.1.2:58770 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012982932s
	I0419 18:59:13.457158   14960 command_runner.go:130] > [INFO] 10.244.1.2:55456 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174201s
	I0419 18:59:13.457199   14960 command_runner.go:130] > [INFO] 10.244.1.2:59031 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001304s
	I0419 18:59:13.457247   14960 command_runner.go:130] > [INFO] 10.244.1.2:41687 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000198401s
	I0419 18:59:13.457247   14960 command_runner.go:130] > [INFO] 10.244.0.3:46929 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003044s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:35877 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000325701s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:53705 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000318601s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:40560 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164401s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:53239 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001239s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:39754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001464s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:41397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001668s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:49126 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001646s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.1.2:37850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115501s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.1.2:44063 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001443s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.1.2:39924 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000607s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.1.2:53244 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000622s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:52017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001879s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:55488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000814s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:57536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000778s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:45454 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001788s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.1.2:52247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001095s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.1.2:46954 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001143s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.1.2:47574 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098701s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.1.2:36658 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000170301s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:35421 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001002s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:41995 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132201s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:36431 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001956s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] 10.244.0.3:38168 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000222s
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0419 18:59:13.457270   14960 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0419 18:59:13.461762   14960 logs.go:123] Gathering logs for kube-scheduler [e476774b8f77] ...
	I0419 18:59:13.461793   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e476774b8f77"
	I0419 18:59:13.493215   14960 command_runner.go:130] ! I0420 01:35:03.474569       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:13.493339   14960 command_runner.go:130] ! W0420 01:35:04.965330       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0419 18:59:13.493339   14960 command_runner.go:130] ! W0420 01:35:04.965379       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:13.493407   14960 command_runner.go:130] ! W0420 01:35:04.965392       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0419 18:59:13.493407   14960 command_runner.go:130] ! W0420 01:35:04.965399       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0419 18:59:13.493466   14960 command_runner.go:130] ! I0420 01:35:05.040739       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0419 18:59:13.493466   14960 command_runner.go:130] ! I0420 01:35:05.040800       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:13.493466   14960 command_runner.go:130] ! I0420 01:35:05.044777       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0419 18:59:13.493528   14960 command_runner.go:130] ! I0420 01:35:05.045192       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 18:59:13.493569   14960 command_runner.go:130] ! I0420 01:35:05.045423       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:13.493598   14960 command_runner.go:130] ! I0420 01:35:05.046180       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:13.493598   14960 command_runner.go:130] ! W0420 01:35:05.063208       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:13.493731   14960 command_runner.go:130] ! E0420 01:35:05.064240       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:13.493731   14960 command_runner.go:130] ! W0420 01:35:05.063609       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.493797   14960 command_runner.go:130] ! E0420 01:35:05.065130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.493821   14960 command_runner.go:130] ! W0420 01:35:05.063676       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! E0420 01:35:05.065433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! W0420 01:35:05.063732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! E0420 01:35:05.065801       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! W0420 01:35:05.063780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! E0420 01:35:05.066820       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! W0420 01:35:05.063927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! E0420 01:35:05.067122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! W0420 01:35:05.063973       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! E0420 01:35:05.069517       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! W0420 01:35:05.064025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! E0420 01:35:05.069884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! W0420 01:35:05.064095       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! E0420 01:35:05.070309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! W0420 01:35:05.064163       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! E0420 01:35:05.070884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! W0420 01:35:05.070236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:13.493850   14960 command_runner.go:130] ! E0420 01:35:05.071293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:13.494384   14960 command_runner.go:130] ! W0420 01:35:05.070677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! E0420 01:35:05.072125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! W0420 01:35:05.070741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! E0420 01:35:05.073528       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! W0420 01:35:05.072410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! E0420 01:35:05.073910       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! W0420 01:35:05.072540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! E0420 01:35:05.074332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! W0420 01:35:05.987809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! E0420 01:35:05.988072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! W0420 01:35:06.078924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! E0420 01:35:06.079045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! W0420 01:35:06.146102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! E0420 01:35:06.146225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! W0420 01:35:06.213142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! E0420 01:35:06.213279       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.494476   14960 command_runner.go:130] ! W0420 01:35:06.278808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.495027   14960 command_runner.go:130] ! E0420 01:35:06.279232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.495027   14960 command_runner.go:130] ! W0420 01:35:06.310265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:13.495027   14960 command_runner.go:130] ! E0420 01:35:06.311126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:13.495027   14960 command_runner.go:130] ! W0420 01:35:06.333128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:13.495027   14960 command_runner.go:130] ! E0420 01:35:06.333531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:13.495193   14960 command_runner.go:130] ! W0420 01:35:06.355993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:13.495193   14960 command_runner.go:130] ! E0420 01:35:06.356053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:13.495193   14960 command_runner.go:130] ! W0420 01:35:06.356154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:13.495193   14960 command_runner.go:130] ! E0420 01:35:06.356365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:13.495381   14960 command_runner.go:130] ! W0420 01:35:06.490128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:13.495438   14960 command_runner.go:130] ! E0420 01:35:06.490240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:13.495438   14960 command_runner.go:130] ! W0420 01:35:06.496247       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:13.495499   14960 command_runner.go:130] ! E0420 01:35:06.496709       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:13.495567   14960 command_runner.go:130] ! W0420 01:35:06.552817       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.495567   14960 command_runner.go:130] ! E0420 01:35:06.552917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.495567   14960 command_runner.go:130] ! W0420 01:35:06.607496       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.495650   14960 command_runner.go:130] ! E0420 01:35:06.607914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:13.495650   14960 command_runner.go:130] ! W0420 01:35:06.608255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:13.495709   14960 command_runner.go:130] ! E0420 01:35:06.608488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:13.495777   14960 command_runner.go:130] ! W0420 01:35:06.623642       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:13.495777   14960 command_runner.go:130] ! E0420 01:35:06.624029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:13.495834   14960 command_runner.go:130] ! I0420 01:35:09.746203       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:13.495834   14960 command_runner.go:130] ! I0420 01:55:30.893306       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0419 18:59:13.495834   14960 command_runner.go:130] ! I0420 01:55:30.893359       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0419 18:59:13.495913   14960 command_runner.go:130] ! I0420 01:55:30.893732       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 18:59:13.495913   14960 command_runner.go:130] ! E0420 01:55:30.894682       1 run.go:74] "command failed" err="finished without leader elect"
	I0419 18:59:13.508294   14960 logs.go:123] Gathering logs for kube-controller-manager [b67f2295d26c] ...
	I0419 18:59:13.508294   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67f2295d26c"
	I0419 18:59:13.544686   14960 command_runner.go:130] ! I0420 01:57:58.124915       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:13.545016   14960 command_runner.go:130] ! I0420 01:57:58.572589       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0419 18:59:13.545016   14960 command_runner.go:130] ! I0420 01:57:58.572759       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:13.545016   14960 command_runner.go:130] ! I0420 01:57:58.576545       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:13.545016   14960 command_runner.go:130] ! I0420 01:57:58.576765       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:13.545097   14960 command_runner.go:130] ! I0420 01:57:58.577138       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0419 18:59:13.545097   14960 command_runner.go:130] ! I0420 01:57:58.577308       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:13.545097   14960 command_runner.go:130] ! I0420 01:58:02.671844       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0419 18:59:13.545097   14960 command_runner.go:130] ! I0420 01:58:02.672396       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0419 18:59:13.545097   14960 command_runner.go:130] ! I0420 01:58:02.683222       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0419 18:59:13.545169   14960 command_runner.go:130] ! I0420 01:58:02.683502       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0419 18:59:13.545169   14960 command_runner.go:130] ! I0420 01:58:02.683748       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0419 18:59:13.545169   14960 command_runner.go:130] ! I0420 01:58:02.684992       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0419 18:59:13.545169   14960 command_runner.go:130] ! I0420 01:58:02.685159       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.689572       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.693653       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.694118       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.694295       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.695565       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.695757       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.700089       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.700328       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.700370       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.708704       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.712057       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.712325       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.712551       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0419 18:59:13.545237   14960 command_runner.go:130] ! E0420 01:58:02.728628       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! I0420 01:58:02.728672       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0419 18:59:13.545237   14960 command_runner.go:130] ! E0420 01:58:02.742147       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0419 18:59:13.545775   14960 command_runner.go:130] ! I0420 01:58:02.742194       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0419 18:59:13.545775   14960 command_runner.go:130] ! I0420 01:58:02.742206       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0419 18:59:13.545876   14960 command_runner.go:130] ! I0420 01:58:02.748098       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0419 18:59:13.545876   14960 command_runner.go:130] ! I0420 01:58:02.748399       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0419 18:59:13.545876   14960 command_runner.go:130] ! I0420 01:58:02.748420       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.752218       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.752332       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.752344       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.765569       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.765610       1 shared_informer.go:313] Waiting for caches to sync for job
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.765645       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.772658       1 shared_informer.go:320] Caches are synced for tokens
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.773270       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.773287       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.786700       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.788042       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.799412       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.804126       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.804238       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.814226       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.818062       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.818127       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.868296       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.868361       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.868379       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.870217       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.873404       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.873440       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! W0420 01:58:02.873461       1 shared_informer.go:597] resyncPeriod 18h17m32.022460908s is smaller than resyncCheckPeriod 19h9m29.930546571s and the informer has already started. Changing it to 19h9m29.930546571s
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.873587       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.873612       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.873690       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0419 18:59:13.545930   14960 command_runner.go:130] ! I0420 01:58:02.873722       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0419 18:59:13.546454   14960 command_runner.go:130] ! I0420 01:58:02.873768       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0419 18:59:13.546454   14960 command_runner.go:130] ! I0420 01:58:02.873784       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.873852       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.873883       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.873963       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.873989       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.874019       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.874045       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.874084       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.874104       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.874180       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.874255       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.874269       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.874289       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.910217       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.910746       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.912220       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.928174       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.928508       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.928473       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.929874       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.931641       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.931894       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.932890       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.934333       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.934546       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.934881       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.939106       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:02.939460       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:12.968845       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:12.968916       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:12.969733       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:12.969944       1 shared_informer.go:313] Waiting for caches to sync for node
	I0419 18:59:13.546495   14960 command_runner.go:130] ! I0420 01:58:12.975888       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0419 18:59:13.547019   14960 command_runner.go:130] ! I0420 01:58:12.977148       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0419 18:59:13.547019   14960 command_runner.go:130] ! I0420 01:58:12.977216       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0419 18:59:13.547019   14960 command_runner.go:130] ! I0420 01:58:12.978712       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0419 18:59:13.547019   14960 command_runner.go:130] ! I0420 01:58:12.979007       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0419 18:59:13.547096   14960 command_runner.go:130] ! I0420 01:58:12.979040       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0419 18:59:13.547121   14960 command_runner.go:130] ! I0420 01:58:12.982094       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0419 18:59:13.547152   14960 command_runner.go:130] ! I0420 01:58:12.982639       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0419 18:59:13.547152   14960 command_runner.go:130] ! I0420 01:58:12.982957       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0419 18:59:13.547152   14960 command_runner.go:130] ! I0420 01:58:13.032307       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0419 18:59:13.547190   14960 command_runner.go:130] ! I0420 01:58:13.032749       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0419 18:59:13.547190   14960 command_runner.go:130] ! I0420 01:58:13.035306       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.036848       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.037653       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.038965       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.039366       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.039352       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.040679       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.040782       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.040908       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.041738       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.041781       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.042295       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.041839       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.042314       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.041850       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.042715       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.046953       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.047617       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.047660       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.047670       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.050144       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.050286       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.050982       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.051033       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.051061       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.054294       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.054709       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0419 18:59:13.547254   14960 command_runner.go:130] ! I0420 01:58:13.054987       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0419 18:59:13.547776   14960 command_runner.go:130] ! I0420 01:58:13.057961       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0419 18:59:13.547776   14960 command_runner.go:130] ! I0420 01:58:13.058399       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0419 18:59:13.547776   14960 command_runner.go:130] ! I0420 01:58:13.058606       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0419 18:59:13.547776   14960 command_runner.go:130] ! I0420 01:58:13.060766       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:13.547776   14960 command_runner.go:130] ! I0420 01:58:13.061307       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0419 18:59:13.547776   14960 command_runner.go:130] ! I0420 01:58:13.060852       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:13.547921   14960 command_runner.go:130] ! I0420 01:58:13.061691       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0419 18:59:13.547945   14960 command_runner.go:130] ! I0420 01:58:13.064061       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0419 18:59:13.547945   14960 command_runner.go:130] ! I0420 01:58:13.064698       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0419 18:59:13.547945   14960 command_runner.go:130] ! I0420 01:58:13.065134       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0419 18:59:13.548006   14960 command_runner.go:130] ! I0420 01:58:13.067945       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0419 18:59:13.548006   14960 command_runner.go:130] ! I0420 01:58:13.068315       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0419 18:59:13.548006   14960 command_runner.go:130] ! I0420 01:58:13.068613       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0419 18:59:13.548052   14960 command_runner.go:130] ! I0420 01:58:13.077312       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0419 18:59:13.548089   14960 command_runner.go:130] ! I0420 01:58:13.077939       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0419 18:59:13.548089   14960 command_runner.go:130] ! I0420 01:58:13.078050       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.078623       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.083275       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.083591       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.083702       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.090751       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.091149       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.091393       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.091591       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.096868       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.097085       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.100720       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.101287       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.101375       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.103459       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.106949       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.107026       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.116002       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.139685       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.148344       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.152489       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.140934       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.151083       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000\" does not exist"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.141105       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.156086       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.156676       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m02\" does not exist"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.156750       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.156865       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.548129   14960 command_runner.go:130] ! I0420 01:58:13.142425       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0419 18:59:13.548654   14960 command_runner.go:130] ! I0420 01:58:13.157020       1 shared_informer.go:320] Caches are synced for expand
	I0419 18:59:13.548654   14960 command_runner.go:130] ! I0420 01:58:13.159992       1 shared_informer.go:320] Caches are synced for ephemeral
	I0419 18:59:13.548654   14960 command_runner.go:130] ! I0420 01:58:13.145957       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:13.548654   14960 command_runner.go:130] ! I0420 01:58:13.162320       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0419 18:59:13.548654   14960 command_runner.go:130] ! I0420 01:58:13.165325       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0419 18:59:13.548654   14960 command_runner.go:130] ! I0420 01:58:13.165759       1 shared_informer.go:320] Caches are synced for job
	I0419 18:59:13.548654   14960 command_runner.go:130] ! I0420 01:58:13.169537       1 shared_informer.go:320] Caches are synced for service account
	I0419 18:59:13.548654   14960 command_runner.go:130] ! I0420 01:58:13.171293       1 shared_informer.go:320] Caches are synced for node
	I0419 18:59:13.548654   14960 command_runner.go:130] ! I0420 01:58:13.178178       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.178222       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.178230       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.178237       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.178270       1 shared_informer.go:320] Caches are synced for attach detach
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.179699       1 shared_informer.go:320] Caches are synced for PV protection
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.183856       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.183905       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.188521       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.195859       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.200417       1 shared_informer.go:320] Caches are synced for crt configmap
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.201881       1 shared_informer.go:320] Caches are synced for persistent volume
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.204647       1 shared_informer.go:320] Caches are synced for endpoint
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.207356       1 shared_informer.go:320] Caches are synced for PVC protection
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.213532       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.214173       1 shared_informer.go:320] Caches are synced for namespace
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.219105       1 shared_informer.go:320] Caches are synced for GC
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.228919       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.535929ms"
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.230155       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.901µs"
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.230170       1 shared_informer.go:320] Caches are synced for HPA
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.234086       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.236046       1 shared_informer.go:320] Caches are synced for TTL
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.240266       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.682408ms"
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.240992       1 shared_informer.go:320] Caches are synced for deployment
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.243741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="114.104µs"
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.248776       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.252859       1 shared_informer.go:320] Caches are synced for daemon sets
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.253008       1 shared_informer.go:320] Caches are synced for taint
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.259997       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.297486       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000"
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.297542       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m02"
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.297627       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m03"
	I0419 18:59:13.548778   14960 command_runner.go:130] ! I0420 01:58:13.297865       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0419 18:59:13.549304   14960 command_runner.go:130] ! I0420 01:58:13.335459       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0419 18:59:13.549304   14960 command_runner.go:130] ! I0420 01:58:13.374436       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:13.549304   14960 command_runner.go:130] ! I0420 01:58:13.389294       1 shared_informer.go:320] Caches are synced for cronjob
	I0419 18:59:13.549347   14960 command_runner.go:130] ! I0420 01:58:13.392315       1 shared_informer.go:320] Caches are synced for disruption
	I0419 18:59:13.549347   14960 command_runner.go:130] ! I0420 01:58:13.397172       1 shared_informer.go:320] Caches are synced for stateful set
	I0419 18:59:13.549347   14960 command_runner.go:130] ! I0420 01:58:13.416186       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:13.549347   14960 command_runner.go:130] ! I0420 01:58:13.857437       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:13.549347   14960 command_runner.go:130] ! I0420 01:58:13.878325       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:13.549347   14960 command_runner.go:130] ! I0420 01:58:13.878534       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0419 18:59:13.549441   14960 command_runner.go:130] ! I0420 01:58:40.290168       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:13.549462   14960 command_runner.go:130] ! I0420 01:58:53.395955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.694507ms"
	I0419 18:59:13.549462   14960 command_runner.go:130] ! I0420 01:58:53.396146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.003µs"
	I0419 18:59:13.549462   14960 command_runner.go:130] ! I0420 01:59:07.033370       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.713655ms"
	I0419 18:59:13.549529   14960 command_runner.go:130] ! I0420 01:59:07.033533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.092µs"
	I0419 18:59:13.549603   14960 command_runner.go:130] ! I0420 01:59:07.047220       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.391µs"
	I0419 18:59:13.549603   14960 command_runner.go:130] ! I0420 01:59:07.121391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.338984ms"
	I0419 18:59:13.549603   14960 command_runner.go:130] ! I0420 01:59:07.121503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.691µs"
	I0419 18:59:13.566217   14960 logs.go:123] Gathering logs for dmesg ...
	I0419 18:59:13.566217   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 18:59:13.592389   14960 command_runner.go:130] > [Apr20 01:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0419 18:59:13.592476   14960 command_runner.go:130] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0419 18:59:13.592476   14960 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0419 18:59:13.592476   14960 command_runner.go:130] > [  +0.134823] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0419 18:59:13.592572   14960 command_runner.go:130] > [  +0.023006] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0419 18:59:13.592572   14960 command_runner.go:130] > [  +0.000006] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0419 18:59:13.592572   14960 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.065433] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.022829] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0419 18:59:13.592628   14960 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +5.461945] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.733998] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +1.817887] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +7.031305] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0419 18:59:13.592628   14960 command_runner.go:130] > [Apr20 01:57] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.209815] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [ +26.622359] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.115734] kauditd_printk_skb: 73 callbacks suppressed
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.605928] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.209234] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.243987] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +2.954231] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.209781] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.225214] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.313735] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.929646] systemd-fstab-generator[1383]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +0.108494] kauditd_printk_skb: 205 callbacks suppressed
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +3.650728] systemd-fstab-generator[1520]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +1.371725] kauditd_printk_skb: 49 callbacks suppressed
	I0419 18:59:13.592628   14960 command_runner.go:130] > [Apr20 01:58] kauditd_printk_skb: 25 callbacks suppressed
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +3.878920] systemd-fstab-generator[2324]: Ignoring "noauto" option for root device
	I0419 18:59:13.592628   14960 command_runner.go:130] > [  +7.552702] kauditd_printk_skb: 70 callbacks suppressed
	I0419 18:59:13.594819   14960 logs.go:123] Gathering logs for describe nodes ...
	I0419 18:59:13.594819   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 18:59:13.827694   14960 command_runner.go:130] > Name:               multinode-348000
	I0419 18:59:13.827694   14960 command_runner.go:130] > Roles:              control-plane
	I0419 18:59:13.827694   14960 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     kubernetes.io/hostname=multinode-348000
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     kubernetes.io/os=linux
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     minikube.k8s.io/name=multinode-348000
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_04_19T18_35_09_0700
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0419 18:59:13.827694   14960 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0419 18:59:13.827694   14960 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0419 18:59:13.827694   14960 command_runner.go:130] > CreationTimestamp:  Sat, 20 Apr 2024 01:35:05 +0000
	I0419 18:59:13.827694   14960 command_runner.go:130] > Taints:             <none>
	I0419 18:59:13.827694   14960 command_runner.go:130] > Unschedulable:      false
	I0419 18:59:13.827694   14960 command_runner.go:130] > Lease:
	I0419 18:59:13.827694   14960 command_runner.go:130] >   HolderIdentity:  multinode-348000
	I0419 18:59:13.827694   14960 command_runner.go:130] >   AcquireTime:     <unset>
	I0419 18:59:13.827694   14960 command_runner.go:130] >   RenewTime:       Sat, 20 Apr 2024 01:59:11 +0000
	I0419 18:59:13.827694   14960 command_runner.go:130] > Conditions:
	I0419 18:59:13.827694   14960 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0419 18:59:13.827694   14960 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0419 18:59:13.827694   14960 command_runner.go:130] >   MemoryPressure   False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0419 18:59:13.827694   14960 command_runner.go:130] >   DiskPressure     False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0419 18:59:13.828224   14960 command_runner.go:130] >   PIDPressure      False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0419 18:59:13.828224   14960 command_runner.go:130] >   Ready            True    Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:58:40 +0000   KubeletReady                 kubelet is posting ready status
	I0419 18:59:13.828309   14960 command_runner.go:130] > Addresses:
	I0419 18:59:13.828309   14960 command_runner.go:130] >   InternalIP:  172.19.42.24
	I0419 18:59:13.828309   14960 command_runner.go:130] >   Hostname:    multinode-348000
	I0419 18:59:13.828309   14960 command_runner.go:130] > Capacity:
	I0419 18:59:13.828309   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:13.828309   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:13.828309   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:13.828383   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:13.828383   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:13.828416   14960 command_runner.go:130] > Allocatable:
	I0419 18:59:13.828416   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:13.828416   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:13.828416   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:13.828470   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:13.828486   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:13.828508   14960 command_runner.go:130] > System Info:
	I0419 18:59:13.828508   14960 command_runner.go:130] >   Machine ID:                 bd21fc8af31a4161a4396c16b70a2fc3
	I0419 18:59:13.828508   14960 command_runner.go:130] >   System UUID:                fdc3fb6e-1818-9a4e-b496-b7ed0124a8e6
	I0419 18:59:13.828508   14960 command_runner.go:130] >   Boot ID:                    047b982b-9f97-4a1a-8f8a-a308f369753b
	I0419 18:59:13.828558   14960 command_runner.go:130] >   Kernel Version:             5.10.207
	I0419 18:59:13.828558   14960 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0419 18:59:13.828558   14960 command_runner.go:130] >   Operating System:           linux
	I0419 18:59:13.828591   14960 command_runner.go:130] >   Architecture:               amd64
	I0419 18:59:13.828591   14960 command_runner.go:130] >   Container Runtime Version:  docker://26.0.1
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0419 18:59:13.828620   14960 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0419 18:59:13.828620   14960 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0419 18:59:13.828620   14960 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0419 18:59:13.828620   14960 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0419 18:59:13.828620   14960 command_runner.go:130] >   default                     busybox-fc5497c4f-xnz2k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0419 18:59:13.828620   14960 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-7w477                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0419 18:59:13.828620   14960 command_runner.go:130] >   kube-system                 etcd-multinode-348000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0419 18:59:13.828620   14960 command_runner.go:130] >   kube-system                 kindnet-s4fsr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0419 18:59:13.828620   14960 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-348000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0419 18:59:13.828620   14960 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-348000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0419 18:59:13.828620   14960 command_runner.go:130] >   kube-system                 kube-proxy-kj76x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0419 18:59:13.828620   14960 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-348000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0419 18:59:13.828620   14960 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0419 18:59:13.828620   14960 command_runner.go:130] > Allocated resources:
	I0419 18:59:13.828620   14960 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Resource           Requests     Limits
	I0419 18:59:13.828620   14960 command_runner.go:130] >   --------           --------     ------
	I0419 18:59:13.828620   14960 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0419 18:59:13.828620   14960 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0419 18:59:13.828620   14960 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0419 18:59:13.828620   14960 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0419 18:59:13.828620   14960 command_runner.go:130] > Events:
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0419 18:59:13.828620   14960 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  Starting                 70s                kube-proxy       
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-348000 status is now: NodeHasSufficientPID
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-348000 status is now: NodeHasSufficientMemory
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-348000 status is now: NodeHasNoDiskPressure
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-348000 event: Registered Node multinode-348000 in Controller
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-348000 status is now: NodeReady
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  Starting                 78s                kubelet          Starting kubelet.
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node multinode-348000 status is now: NodeHasSufficientMemory
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node multinode-348000 status is now: NodeHasNoDiskPressure
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node multinode-348000 status is now: NodeHasSufficientPID
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:13.828620   14960 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-348000 event: Registered Node multinode-348000 in Controller
	I0419 18:59:13.828620   14960 command_runner.go:130] > Name:               multinode-348000-m02
	I0419 18:59:13.828620   14960 command_runner.go:130] > Roles:              <none>
	I0419 18:59:13.829198   14960 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0419 18:59:13.829198   14960 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0419 18:59:13.829198   14960 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0419 18:59:13.829198   14960 command_runner.go:130] >                     kubernetes.io/hostname=multinode-348000-m02
	I0419 18:59:13.829198   14960 command_runner.go:130] >                     kubernetes.io/os=linux
	I0419 18:59:13.829246   14960 command_runner.go:130] >                     minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	I0419 18:59:13.829246   14960 command_runner.go:130] >                     minikube.k8s.io/name=multinode-348000
	I0419 18:59:13.829246   14960 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0419 18:59:13.829246   14960 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_04_19T18_38_19_0700
	I0419 18:59:13.829246   14960 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0419 18:59:13.829246   14960 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0419 18:59:13.829246   14960 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0419 18:59:13.829246   14960 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0419 18:59:13.829246   14960 command_runner.go:130] > CreationTimestamp:  Sat, 20 Apr 2024 01:38:18 +0000
	I0419 18:59:13.829355   14960 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0419 18:59:13.829355   14960 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0419 18:59:13.829355   14960 command_runner.go:130] > Unschedulable:      false
	I0419 18:59:13.829355   14960 command_runner.go:130] > Lease:
	I0419 18:59:13.829396   14960 command_runner.go:130] >   HolderIdentity:  multinode-348000-m02
	I0419 18:59:13.829396   14960 command_runner.go:130] >   AcquireTime:     <unset>
	I0419 18:59:13.829396   14960 command_runner.go:130] >   RenewTime:       Sat, 20 Apr 2024 01:54:49 +0000
	I0419 18:59:13.829396   14960 command_runner.go:130] > Conditions:
	I0419 18:59:13.829449   14960 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0419 18:59:13.829449   14960 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0419 18:59:13.829489   14960 command_runner.go:130] >   MemoryPressure   Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:13.829489   14960 command_runner.go:130] >   DiskPressure     Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:13.829540   14960 command_runner.go:130] >   PIDPressure      Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:13.829540   14960 command_runner.go:130] >   Ready            Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:13.829540   14960 command_runner.go:130] > Addresses:
	I0419 18:59:13.829605   14960 command_runner.go:130] >   InternalIP:  172.19.32.249
	I0419 18:59:13.829605   14960 command_runner.go:130] >   Hostname:    multinode-348000-m02
	I0419 18:59:13.829605   14960 command_runner.go:130] > Capacity:
	I0419 18:59:13.829605   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:13.829648   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:13.829648   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:13.829648   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:13.829648   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:13.829648   14960 command_runner.go:130] > Allocatable:
	I0419 18:59:13.829690   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:13.829729   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:13.829729   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:13.829771   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:13.829771   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:13.829771   14960 command_runner.go:130] > System Info:
	I0419 18:59:13.829809   14960 command_runner.go:130] >   Machine ID:                 ea453a3100b34d789441206109708446
	I0419 18:59:13.829809   14960 command_runner.go:130] >   System UUID:                9f7972f9-8942-ef4f-b0cf-029b405f5832
	I0419 18:59:13.829851   14960 command_runner.go:130] >   Boot ID:                    d8ef37df-1396-47c1-8bea-04667e5bc60b
	I0419 18:59:13.829851   14960 command_runner.go:130] >   Kernel Version:             5.10.207
	I0419 18:59:13.829851   14960 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0419 18:59:13.829851   14960 command_runner.go:130] >   Operating System:           linux
	I0419 18:59:13.829895   14960 command_runner.go:130] >   Architecture:               amd64
	I0419 18:59:13.829895   14960 command_runner.go:130] >   Container Runtime Version:  docker://26.0.1
	I0419 18:59:13.829895   14960 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0419 18:59:13.829895   14960 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0419 18:59:13.829937   14960 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0419 18:59:13.829937   14960 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0419 18:59:13.829974   14960 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0419 18:59:13.829974   14960 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0419 18:59:13.829974   14960 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0419 18:59:13.830016   14960 command_runner.go:130] >   default                     busybox-fc5497c4f-2d5hs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0419 18:59:13.830016   14960 command_runner.go:130] >   kube-system                 kindnet-s98rh              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0419 18:59:13.830016   14960 command_runner.go:130] >   kube-system                 kube-proxy-bjv9b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0419 18:59:13.830062   14960 command_runner.go:130] > Allocated resources:
	I0419 18:59:13.830062   14960 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0419 18:59:13.830062   14960 command_runner.go:130] >   Resource           Requests   Limits
	I0419 18:59:13.830101   14960 command_runner.go:130] >   --------           --------   ------
	I0419 18:59:13.830101   14960 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0419 18:59:13.830101   14960 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0419 18:59:13.830133   14960 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0419 18:59:13.830133   14960 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0419 18:59:13.830170   14960 command_runner.go:130] > Events:
	I0419 18:59:13.830170   14960 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0419 18:59:13.830170   14960 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0419 18:59:13.830170   14960 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0419 18:59:13.830212   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-348000-m02 status is now: NodeHasSufficientMemory
	I0419 18:59:13.830254   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-348000-m02 status is now: NodeHasNoDiskPressure
	I0419 18:59:13.830254   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-348000-m02 status is now: NodeHasSufficientPID
	I0419 18:59:13.830254   14960 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-348000-m02 event: Registered Node multinode-348000-m02 in Controller
	I0419 18:59:13.830299   14960 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-348000-m02 status is now: NodeReady
	I0419 18:59:13.830336   14960 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-348000-m02 event: Registered Node multinode-348000-m02 in Controller
	I0419 18:59:13.830336   14960 command_runner.go:130] >   Normal  NodeNotReady             20s                node-controller  Node multinode-348000-m02 status is now: NodeNotReady
	I0419 18:59:13.830336   14960 command_runner.go:130] > Name:               multinode-348000-m03
	I0419 18:59:13.830336   14960 command_runner.go:130] > Roles:              <none>
	I0419 18:59:13.830378   14960 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0419 18:59:13.830378   14960 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0419 18:59:13.830378   14960 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0419 18:59:13.830414   14960 command_runner.go:130] >                     kubernetes.io/hostname=multinode-348000-m03
	I0419 18:59:13.830414   14960 command_runner.go:130] >                     kubernetes.io/os=linux
	I0419 18:59:13.830414   14960 command_runner.go:130] >                     minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	I0419 18:59:13.830414   14960 command_runner.go:130] >                     minikube.k8s.io/name=multinode-348000
	I0419 18:59:13.830473   14960 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0419 18:59:13.830473   14960 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_04_19T18_53_29_0700
	I0419 18:59:13.830473   14960 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0419 18:59:13.830473   14960 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0419 18:59:13.830473   14960 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0419 18:59:13.830515   14960 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0419 18:59:13.830515   14960 command_runner.go:130] > CreationTimestamp:  Sat, 20 Apr 2024 01:53:28 +0000
	I0419 18:59:13.830515   14960 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0419 18:59:13.830567   14960 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0419 18:59:13.830567   14960 command_runner.go:130] > Unschedulable:      false
	I0419 18:59:13.830567   14960 command_runner.go:130] > Lease:
	I0419 18:59:13.830687   14960 command_runner.go:130] >   HolderIdentity:  multinode-348000-m03
	I0419 18:59:13.830799   14960 command_runner.go:130] >   AcquireTime:     <unset>
	I0419 18:59:13.830799   14960 command_runner.go:130] >   RenewTime:       Sat, 20 Apr 2024 01:54:29 +0000
	I0419 18:59:13.830845   14960 command_runner.go:130] > Conditions:
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0419 18:59:13.830845   14960 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0419 18:59:13.830845   14960 command_runner.go:130] >   MemoryPressure   Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:13.830845   14960 command_runner.go:130] >   DiskPressure     Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:13.830845   14960 command_runner.go:130] >   PIDPressure      Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Ready            Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:13.830845   14960 command_runner.go:130] > Addresses:
	I0419 18:59:13.830845   14960 command_runner.go:130] >   InternalIP:  172.19.37.59
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Hostname:    multinode-348000-m03
	I0419 18:59:13.830845   14960 command_runner.go:130] > Capacity:
	I0419 18:59:13.830845   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:13.830845   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:13.830845   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:13.830845   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:13.830845   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:13.830845   14960 command_runner.go:130] > Allocatable:
	I0419 18:59:13.830845   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:13.830845   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:13.830845   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:13.830845   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:13.830845   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:13.830845   14960 command_runner.go:130] > System Info:
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Machine ID:                 02e45e9bf03f4852a443a43ac6a8538b
	I0419 18:59:13.830845   14960 command_runner.go:130] >   System UUID:                37a43d59-2157-6e44-8d13-6c975ea12fea
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Boot ID:                    404bc64b-d4fc-4c63-a589-8191649bdfaa
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Kernel Version:             5.10.207
	I0419 18:59:13.830845   14960 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Operating System:           linux
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Architecture:               amd64
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Container Runtime Version:  docker://26.0.1
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0419 18:59:13.830845   14960 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0419 18:59:13.830845   14960 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0419 18:59:13.830845   14960 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0419 18:59:13.830845   14960 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0419 18:59:13.830845   14960 command_runner.go:130] >   kube-system                 kindnet-mg8qs       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0419 18:59:13.830845   14960 command_runner.go:130] >   kube-system                 kube-proxy-2jjsq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0419 18:59:13.830845   14960 command_runner.go:130] > Allocated resources:
	I0419 18:59:13.830845   14960 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Resource           Requests   Limits
	I0419 18:59:13.830845   14960 command_runner.go:130] >   --------           --------   ------
	I0419 18:59:13.830845   14960 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0419 18:59:13.830845   14960 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0419 18:59:13.830845   14960 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0419 18:59:13.830845   14960 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0419 18:59:13.830845   14960 command_runner.go:130] > Events:
	I0419 18:59:13.830845   14960 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0419 18:59:13.830845   14960 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0419 18:59:13.831449   14960 command_runner.go:130] >   Normal  Starting                 5m41s                  kube-proxy       
	I0419 18:59:13.831449   14960 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0419 18:59:13.831449   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:13.831523   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientMemory
	I0419 18:59:13.831639   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-348000-m03 status is now: NodeHasNoDiskPressure
	I0419 18:59:13.831639   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientPID
	I0419 18:59:13.831639   14960 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-348000-m03 status is now: NodeReady
	I0419 18:59:13.831745   14960 command_runner.go:130] >   Normal  Starting                 5m45s                  kubelet          Starting kubelet.
	I0419 18:59:13.831745   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m45s (x2 over 5m45s)  kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientMemory
	I0419 18:59:13.831783   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m45s (x2 over 5m45s)  kubelet          Node multinode-348000-m03 status is now: NodeHasNoDiskPressure
	I0419 18:59:13.831783   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m45s (x2 over 5m45s)  kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientPID
	I0419 18:59:13.831783   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m45s                  kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:13.831849   14960 command_runner.go:130] >   Normal  RegisteredNode           5m41s                  node-controller  Node multinode-348000-m03 event: Registered Node multinode-348000-m03 in Controller
	I0419 18:59:13.831849   14960 command_runner.go:130] >   Normal  NodeReady                5m37s                  kubelet          Node multinode-348000-m03 status is now: NodeReady
	I0419 18:59:13.831849   14960 command_runner.go:130] >   Normal  NodeNotReady             4m                     node-controller  Node multinode-348000-m03 status is now: NodeNotReady
	I0419 18:59:13.831849   14960 command_runner.go:130] >   Normal  RegisteredNode           60s                    node-controller  Node multinode-348000-m03 event: Registered Node multinode-348000-m03 in Controller
	I0419 18:59:13.842310   14960 logs.go:123] Gathering logs for kube-scheduler [d57aee391c14] ...
	I0419 18:59:13.842310   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57aee391c14"
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:57:58.020728       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.771749       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.771906       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.785599       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.785824       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.785929       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.785956       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.785972       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.786046       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.786323       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.786915       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.887091       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.887476       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:13.876122   14960 command_runner.go:130] ! I0420 01:58:00.888293       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0419 18:59:13.878862   14960 logs.go:123] Gathering logs for kindnet [f8c798c99407] ...
	I0419 18:59:13.879010   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c798c99407"
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:03.441751       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:03.511070       1 main.go:107] hostIP = 172.19.42.24
	I0419 18:59:13.908249   14960 command_runner.go:130] ! podIP = 172.19.42.24
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:03.513110       1 main.go:116] setting mtu 1500 for CNI 
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:03.513147       1 main.go:146] kindnetd IP family: "ipv4"
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:03.513182       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:07.011650       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:10.084231       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:13.156371       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:16.227521       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:13.908249   14960 command_runner.go:130] ! I0420 01:58:19.299385       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:13.908249   14960 command_runner.go:130] ! panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:13.908249   14960 command_runner.go:130] ! goroutine 1 [running]:
	I0419 18:59:13.908249   14960 command_runner.go:130] ! main.main()
	I0419 18:59:13.908249   14960 command_runner.go:130] ! 	/go/src/cmd/kindnetd/main.go:195 +0xd3d
	I0419 18:59:16.416594   14960 api_server.go:253] Checking apiserver healthz at https://172.19.42.24:8443/healthz ...
	I0419 18:59:16.424442   14960 api_server.go:279] https://172.19.42.24:8443/healthz returned 200:
	ok
	I0419 18:59:16.424924   14960 round_trippers.go:463] GET https://172.19.42.24:8443/version
	I0419 18:59:16.424924   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:16.424924   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:16.424924   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:16.426900   14960 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 18:59:16.426900   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:16.426900   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:16.426900   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:16.426900   14960 round_trippers.go:580]     Content-Length: 263
	I0419 18:59:16.426900   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:16 GMT
	I0419 18:59:16.426900   14960 round_trippers.go:580]     Audit-Id: 053dda65-737e-4062-888c-a5c46f4ce2fe
	I0419 18:59:16.426900   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:16.426900   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:16.426900   14960 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0419 18:59:16.426900   14960 api_server.go:141] control plane version: v1.30.0
	I0419 18:59:16.426900   14960 api_server.go:131] duration metric: took 3.8512548s to wait for apiserver health ...
	I0419 18:59:16.426900   14960 system_pods.go:43] waiting for kube-system pods to appear ...
	I0419 18:59:16.441380   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 18:59:16.465666   14960 command_runner.go:130] > bd3aa93bac25
	I0419 18:59:16.466162   14960 logs.go:276] 1 containers: [bd3aa93bac25]
	I0419 18:59:16.477311   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 18:59:16.502783   14960 command_runner.go:130] > 2deabe4dbdf4
	I0419 18:59:16.503808   14960 logs.go:276] 1 containers: [2deabe4dbdf4]
	I0419 18:59:16.514967   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 18:59:16.540811   14960 command_runner.go:130] > 352cf21a3e20
	I0419 18:59:16.540811   14960 command_runner.go:130] > 627b84abf45c
	I0419 18:59:16.541036   14960 logs.go:276] 2 containers: [352cf21a3e20 627b84abf45c]
	I0419 18:59:16.551329   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 18:59:16.578348   14960 command_runner.go:130] > d57aee391c14
	I0419 18:59:16.578348   14960 command_runner.go:130] > e476774b8f77
	I0419 18:59:16.578348   14960 logs.go:276] 2 containers: [d57aee391c14 e476774b8f77]
	I0419 18:59:16.589290   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 18:59:16.615892   14960 command_runner.go:130] > e438af0f1ec9
	I0419 18:59:16.616767   14960 command_runner.go:130] > a6586791413d
	I0419 18:59:16.617040   14960 logs.go:276] 2 containers: [e438af0f1ec9 a6586791413d]
	I0419 18:59:16.627464   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 18:59:16.651894   14960 command_runner.go:130] > b67f2295d26c
	I0419 18:59:16.651964   14960 command_runner.go:130] > 9638ddcd5428
	I0419 18:59:16.651964   14960 logs.go:276] 2 containers: [b67f2295d26c 9638ddcd5428]
	I0419 18:59:16.661909   14960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 18:59:16.688754   14960 command_runner.go:130] > ae0b21715f86
	I0419 18:59:16.688754   14960 command_runner.go:130] > f8c798c99407
	I0419 18:59:16.688917   14960 logs.go:276] 2 containers: [ae0b21715f86 f8c798c99407]
	I0419 18:59:16.688917   14960 logs.go:123] Gathering logs for kube-controller-manager [9638ddcd5428] ...
	I0419 18:59:16.689070   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9638ddcd5428"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:03.372734       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:03.812267       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:03.812307       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:03.816347       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:03.816460       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:03.817145       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:03.817250       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:07.961997       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:07.962027       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:07.977942       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:07.978602       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:07.980093       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:07.989698       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:07.990033       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:07.990321       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:08.005238       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:08.005791       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:08.006985       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:08.018816       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:08.019229       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:08.019480       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0419 18:59:16.722857   14960 command_runner.go:130] ! I0420 01:35:08.046904       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.047815       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.049696       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.050007       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.062049       1 shared_informer.go:320] Caches are synced for tokens
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.065356       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.065873       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.113476       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.114130       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.116086       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.129157       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.129533       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.129568       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.165596       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.166223       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.166242       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.211668       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.211749       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.211766       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.232421       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.232496       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.232934       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.232991       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502058       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502113       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! W0420 01:35:08.502140       1 shared_informer.go:597] resyncPeriod 21h44m16.388395173s is smaller than resyncCheckPeriod 22h35m59.940993284s and the informer has already started. Changing it to 22h35m59.940993284s
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502208       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502278       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502298       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502314       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502330       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502351       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502407       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502437       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502458       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502479       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502501       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! W0420 01:35:08.502514       1 shared_informer.go:597] resyncPeriod 19h4m59.465157498s is smaller than resyncCheckPeriod 22h35m59.940993284s and the informer has already started. Changing it to 22h35m59.940993284s
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502638       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502666       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502684       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502713       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502732       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502771       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502793       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.502820       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.503928       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.503949       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.504053       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.534828       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.534961       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.674769       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.675139       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.675159       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0419 18:59:16.723861   14960 command_runner.go:130] ! I0420 01:35:08.825012       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:08.825352       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:08.825549       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.067591       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.068206       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.068502       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.068578       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.320310       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.320746       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.321134       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.516184       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.516262       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.691568       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.693516       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.693713       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.694525       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.933130       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.933168       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:09.936074       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.217647       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.218375       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.218475       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.267124       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.267436       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.267570       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.268204       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.268422       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0419 18:59:16.724852   14960 command_runner.go:130] ! E0420 01:35:10.316394       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.316683       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.472792       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.472905       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.472918       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.624680       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.624742       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.624753       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.772273       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.772422       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.773389       1 shared_informer.go:313] Waiting for caches to sync for job
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.922317       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.922464       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:10.922478       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.070777       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.071059       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.071119       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.071166       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.071195       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.071205       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.222012       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.222056       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.222746       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.372624       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.372812       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.372965       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.522757       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.522983       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.523000       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.671210       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.671410       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.671429       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.820688       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.821596       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.821935       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0419 18:59:16.724852   14960 command_runner.go:130] ! E0420 01:35:11.971137       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.971301       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.971316       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0419 18:59:16.724852   14960 command_runner.go:130] ! I0420 01:35:11.971323       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.121255       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.121746       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.121947       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.274169       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.274383       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.274402       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.318009       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.318126       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.318164       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.318524       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.318628       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.318650       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.319568       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.319800       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.319996       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.320096       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.320128       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.320161       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:12.320270       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.381189       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.381256       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.381472       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.381508       1 shared_informer.go:313] Waiting for caches to sync for node
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.395580       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.395660       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.396587       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.396886       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.405182       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.428741       1 shared_informer.go:320] Caches are synced for service account
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.430037       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.433041       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.440027       1 shared_informer.go:320] Caches are synced for namespace
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.466474       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.469554       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.477923       1 shared_informer.go:320] Caches are synced for PV protection
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.479748       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.479794       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.480700       1 shared_informer.go:320] Caches are synced for PVC protection
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.492034       1 shared_informer.go:320] Caches are synced for expand
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.492084       1 shared_informer.go:320] Caches are synced for endpoint
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.492130       1 shared_informer.go:320] Caches are synced for job
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.497920       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.498399       1 shared_informer.go:320] Caches are synced for node
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.498473       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.498515       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.498526       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.498531       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.508187       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000\" does not exist"
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.508396       1 shared_informer.go:320] Caches are synced for GC
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.512585       1 shared_informer.go:320] Caches are synced for crt configmap
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.520820       1 shared_informer.go:320] Caches are synced for daemon sets
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.521073       1 shared_informer.go:320] Caches are synced for stateful set
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.521189       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.521223       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.521268       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0419 18:59:16.725890   14960 command_runner.go:130] ! I0420 01:35:22.527709       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.528722       1 shared_informer.go:320] Caches are synced for cronjob
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.528751       1 shared_informer.go:320] Caches are synced for ephemeral
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.528767       1 shared_informer.go:320] Caches are synced for TTL
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.529370       1 shared_informer.go:320] Caches are synced for HPA
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.529414       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.529477       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.529509       1 shared_informer.go:320] Caches are synced for persistent volume
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.552273       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000" podCIDRs=["10.244.0.0/24"]
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.569198       1 shared_informer.go:320] Caches are synced for taint
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.569287       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.569354       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.569429       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.574991       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.590559       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.623057       1 shared_informer.go:320] Caches are synced for deployment
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.623597       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.651041       1 shared_informer.go:320] Caches are synced for disruption
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.699011       1 shared_informer.go:320] Caches are synced for attach detach
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.705303       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:22.706815       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:23.168892       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:23.169115       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:23.179171       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:23.263116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="374.4156ms"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:23.291471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.172623ms"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:23.291547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.106µs"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:23.578182       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.803114ms"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:23.630233       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.666311ms"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:23.630467       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="183.125µs"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:36.906373       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="291.116µs"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:36.934151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="76.104µs"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:37.573034       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:39.217159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.488µs"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:39.265403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.862669ms"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:35:39.266023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="552.786µs"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:38:18.575680       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m02\" does not exist"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:38:18.590900       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m02" podCIDRs=["10.244.1.0/24"]
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:38:22.613051       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m02"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:38:37.669535       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:39:03.031296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.090021ms"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:39:03.053897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.363721ms"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:39:03.054543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.499µs"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:39:05.783927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.434034ms"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:39:05.784276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="108.901µs"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:39:07.103598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.163039ms"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:39:07.104054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.4µs"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:42:52.390190       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:42:52.390530       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:42:52.403944       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m03" podCIDRs=["10.244.2.0/24"]
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:42:52.676079       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m03"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:43:11.211743       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.726876   14960 command_runner.go:130] ! I0420 01:50:42.818871       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.727856   14960 command_runner.go:130] ! I0420 01:53:22.621370       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.727856   14960 command_runner.go:130] ! I0420 01:53:28.752017       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0419 18:59:16.727856   14960 command_runner.go:130] ! I0420 01:53:28.753300       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.727856   14960 command_runner.go:130] ! I0420 01:53:28.789161       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m03" podCIDRs=["10.244.3.0/24"]
	I0419 18:59:16.727856   14960 command_runner.go:130] ! I0420 01:53:36.097701       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m03"
	I0419 18:59:16.727856   14960 command_runner.go:130] ! I0420 01:55:13.205537       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.745853   14960 logs.go:123] Gathering logs for dmesg ...
	I0419 18:59:16.745853   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 18:59:16.771889   14960 command_runner.go:130] > [Apr20 01:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0419 18:59:16.771889   14960 command_runner.go:130] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0419 18:59:16.771889   14960 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0419 18:59:16.771889   14960 command_runner.go:130] > [  +0.134823] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0419 18:59:16.771889   14960 command_runner.go:130] > [  +0.023006] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0419 18:59:16.771889   14960 command_runner.go:130] > [  +0.000006] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0419 18:59:16.771889   14960 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0419 18:59:16.771889   14960 command_runner.go:130] > [  +0.065433] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0419 18:59:16.771889   14960 command_runner.go:130] > [  +0.022829] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0419 18:59:16.771889   14960 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0419 18:59:16.771889   14960 command_runner.go:130] > [  +5.461945] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.733998] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +1.817887] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +7.031305] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0419 18:59:16.772867   14960 command_runner.go:130] > [Apr20 01:57] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.209815] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [ +26.622359] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.115734] kauditd_printk_skb: 73 callbacks suppressed
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.605928] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.209234] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.243987] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +2.954231] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.209781] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.225214] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.313735] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.929646] systemd-fstab-generator[1383]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +0.108494] kauditd_printk_skb: 205 callbacks suppressed
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +3.650728] systemd-fstab-generator[1520]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +1.371725] kauditd_printk_skb: 49 callbacks suppressed
	I0419 18:59:16.772867   14960 command_runner.go:130] > [Apr20 01:58] kauditd_printk_skb: 25 callbacks suppressed
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +3.878920] systemd-fstab-generator[2324]: Ignoring "noauto" option for root device
	I0419 18:59:16.772867   14960 command_runner.go:130] > [  +7.552702] kauditd_printk_skb: 70 callbacks suppressed
	I0419 18:59:16.773857   14960 logs.go:123] Gathering logs for kube-scheduler [d57aee391c14] ...
	I0419 18:59:16.773857   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57aee391c14"
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:57:58.020728       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.771749       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.771906       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.785599       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.785824       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.785929       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.785956       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.785972       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.786046       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.786323       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.786915       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.887091       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.887476       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:16.803896   14960 command_runner.go:130] ! I0420 01:58:00.888293       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0419 18:59:16.806462   14960 logs.go:123] Gathering logs for kube-controller-manager [b67f2295d26c] ...
	I0419 18:59:16.806462   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67f2295d26c"
	I0419 18:59:16.839657   14960 command_runner.go:130] ! I0420 01:57:58.124915       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:16.839780   14960 command_runner.go:130] ! I0420 01:57:58.572589       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0419 18:59:16.839780   14960 command_runner.go:130] ! I0420 01:57:58.572759       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:16.839780   14960 command_runner.go:130] ! I0420 01:57:58.576545       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:16.839780   14960 command_runner.go:130] ! I0420 01:57:58.576765       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:16.839850   14960 command_runner.go:130] ! I0420 01:57:58.577138       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0419 18:59:16.839850   14960 command_runner.go:130] ! I0420 01:57:58.577308       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:16.839850   14960 command_runner.go:130] ! I0420 01:58:02.671844       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0419 18:59:16.839850   14960 command_runner.go:130] ! I0420 01:58:02.672396       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0419 18:59:16.839850   14960 command_runner.go:130] ! I0420 01:58:02.683222       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0419 18:59:16.839931   14960 command_runner.go:130] ! I0420 01:58:02.683502       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0419 18:59:16.839931   14960 command_runner.go:130] ! I0420 01:58:02.683748       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0419 18:59:16.839931   14960 command_runner.go:130] ! I0420 01:58:02.684992       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0419 18:59:16.839931   14960 command_runner.go:130] ! I0420 01:58:02.685159       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0419 18:59:16.840000   14960 command_runner.go:130] ! I0420 01:58:02.689572       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0419 18:59:16.840000   14960 command_runner.go:130] ! I0420 01:58:02.693653       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0419 18:59:16.840000   14960 command_runner.go:130] ! I0420 01:58:02.694118       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0419 18:59:16.840000   14960 command_runner.go:130] ! I0420 01:58:02.694295       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0419 18:59:16.840000   14960 command_runner.go:130] ! I0420 01:58:02.695565       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0419 18:59:16.840072   14960 command_runner.go:130] ! I0420 01:58:02.695757       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0419 18:59:16.840072   14960 command_runner.go:130] ! I0420 01:58:02.700089       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0419 18:59:16.840106   14960 command_runner.go:130] ! I0420 01:58:02.700328       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0419 18:59:16.840106   14960 command_runner.go:130] ! I0420 01:58:02.700370       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0419 18:59:16.840153   14960 command_runner.go:130] ! I0420 01:58:02.708704       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0419 18:59:16.840153   14960 command_runner.go:130] ! I0420 01:58:02.712057       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0419 18:59:16.840153   14960 command_runner.go:130] ! I0420 01:58:02.712325       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0419 18:59:16.840202   14960 command_runner.go:130] ! I0420 01:58:02.712551       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0419 18:59:16.840202   14960 command_runner.go:130] ! E0420 01:58:02.728628       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.728672       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! E0420 01:58:02.742147       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.742194       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.742206       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.748098       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.748399       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.748420       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.752218       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.752332       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.752344       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.765569       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.765610       1 shared_informer.go:313] Waiting for caches to sync for job
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.765645       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.772658       1 shared_informer.go:320] Caches are synced for tokens
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.773270       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.773287       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.786700       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.788042       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.799412       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.804126       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.804238       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.814226       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.818062       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.818127       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.868296       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.868361       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.868379       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.870217       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.873404       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! I0420 01:58:02.873440       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0419 18:59:16.840269   14960 command_runner.go:130] ! W0420 01:58:02.873461       1 shared_informer.go:597] resyncPeriod 18h17m32.022460908s is smaller than resyncCheckPeriod 19h9m29.930546571s and the informer has already started. Changing it to 19h9m29.930546571s
	I0419 18:59:16.840807   14960 command_runner.go:130] ! I0420 01:58:02.873587       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0419 18:59:16.840807   14960 command_runner.go:130] ! I0420 01:58:02.873612       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0419 18:59:16.840807   14960 command_runner.go:130] ! I0420 01:58:02.873690       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0419 18:59:16.840889   14960 command_runner.go:130] ! I0420 01:58:02.873722       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0419 18:59:16.840933   14960 command_runner.go:130] ! I0420 01:58:02.873768       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0419 18:59:16.840933   14960 command_runner.go:130] ! I0420 01:58:02.873784       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0419 18:59:16.840933   14960 command_runner.go:130] ! I0420 01:58:02.873852       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0419 18:59:16.840988   14960 command_runner.go:130] ! I0420 01:58:02.873883       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0419 18:59:16.841022   14960 command_runner.go:130] ! I0420 01:58:02.873963       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0419 18:59:16.841022   14960 command_runner.go:130] ! I0420 01:58:02.873989       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0419 18:59:16.841071   14960 command_runner.go:130] ! I0420 01:58:02.874019       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0419 18:59:16.841071   14960 command_runner.go:130] ! I0420 01:58:02.874045       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0419 18:59:16.841139   14960 command_runner.go:130] ! I0420 01:58:02.874084       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0419 18:59:16.841139   14960 command_runner.go:130] ! I0420 01:58:02.874104       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0419 18:59:16.841139   14960 command_runner.go:130] ! I0420 01:58:02.874180       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0419 18:59:16.841209   14960 command_runner.go:130] ! I0420 01:58:02.874255       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0419 18:59:16.841261   14960 command_runner.go:130] ! I0420 01:58:02.874269       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:16.841261   14960 command_runner.go:130] ! I0420 01:58:02.874289       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.910217       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.910746       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.912220       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.928174       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.928508       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.928473       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.929874       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.931641       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.931894       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0419 18:59:16.841857   14960 command_runner.go:130] ! I0420 01:58:02.932890       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0419 18:59:16.842387   14960 command_runner.go:130] ! I0420 01:58:02.934333       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0419 18:59:16.842387   14960 command_runner.go:130] ! I0420 01:58:02.934546       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0419 18:59:16.842387   14960 command_runner.go:130] ! I0420 01:58:02.934881       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0419 18:59:16.842387   14960 command_runner.go:130] ! I0420 01:58:02.939106       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0419 18:59:16.842387   14960 command_runner.go:130] ! I0420 01:58:02.939460       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0419 18:59:16.842387   14960 command_runner.go:130] ! I0420 01:58:12.968845       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0419 18:59:16.842496   14960 command_runner.go:130] ! I0420 01:58:12.968916       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0419 18:59:16.842496   14960 command_runner.go:130] ! I0420 01:58:12.969733       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0419 18:59:16.842496   14960 command_runner.go:130] ! I0420 01:58:12.969944       1 shared_informer.go:313] Waiting for caches to sync for node
	I0419 18:59:16.842551   14960 command_runner.go:130] ! I0420 01:58:12.975888       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0419 18:59:16.842551   14960 command_runner.go:130] ! I0420 01:58:12.977148       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0419 18:59:16.842551   14960 command_runner.go:130] ! I0420 01:58:12.977216       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0419 18:59:16.842616   14960 command_runner.go:130] ! I0420 01:58:12.978712       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0419 18:59:16.842642   14960 command_runner.go:130] ! I0420 01:58:12.979007       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:12.979040       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:12.982094       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:12.982639       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:12.982957       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.032307       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.032749       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.035306       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.036848       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.037653       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.038965       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.039366       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.039352       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.040679       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.040782       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.040908       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.041738       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.041781       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.042295       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.041839       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.042314       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.041850       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.042715       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.046953       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.047617       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0419 18:59:16.842672   14960 command_runner.go:130] ! I0420 01:58:13.047660       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0419 18:59:16.843210   14960 command_runner.go:130] ! I0420 01:58:13.047670       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0419 18:59:16.843210   14960 command_runner.go:130] ! I0420 01:58:13.050144       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0419 18:59:16.843210   14960 command_runner.go:130] ! I0420 01:58:13.050286       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0419 18:59:16.843357   14960 command_runner.go:130] ! I0420 01:58:13.050982       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0419 18:59:16.843357   14960 command_runner.go:130] ! I0420 01:58:13.051033       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0419 18:59:16.843666   14960 command_runner.go:130] ! I0420 01:58:13.051061       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0419 18:59:16.843698   14960 command_runner.go:130] ! I0420 01:58:13.054294       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0419 18:59:16.844099   14960 command_runner.go:130] ! I0420 01:58:13.054709       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0419 18:59:16.844794   14960 command_runner.go:130] ! I0420 01:58:13.054987       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0419 18:59:16.844833   14960 command_runner.go:130] ! I0420 01:58:13.057961       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0419 18:59:16.846005   14960 command_runner.go:130] ! I0420 01:58:13.058399       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0419 18:59:16.846005   14960 command_runner.go:130] ! I0420 01:58:13.058606       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0419 18:59:16.846539   14960 command_runner.go:130] ! I0420 01:58:13.060766       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:16.846539   14960 command_runner.go:130] ! I0420 01:58:13.061307       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0419 18:59:16.846579   14960 command_runner.go:130] ! I0420 01:58:13.060852       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0419 18:59:16.846579   14960 command_runner.go:130] ! I0420 01:58:13.061691       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0419 18:59:16.846579   14960 command_runner.go:130] ! I0420 01:58:13.064061       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0419 18:59:16.846664   14960 command_runner.go:130] ! I0420 01:58:13.064698       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.065134       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.067945       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.068315       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.068613       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.077312       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.077939       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.078050       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.078623       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.083275       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.083591       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.083702       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.090751       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.091149       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.091393       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.091591       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.096868       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.097085       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.100720       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.101287       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0419 18:59:16.846689   14960 command_runner.go:130] ! I0420 01:58:13.101375       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0419 18:59:16.847247   14960 command_runner.go:130] ! I0420 01:58:13.103459       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0419 18:59:16.847444   14960 command_runner.go:130] ! I0420 01:58:13.106949       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0419 18:59:16.847571   14960 command_runner.go:130] ! I0420 01:58:13.107026       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0419 18:59:16.847571   14960 command_runner.go:130] ! I0420 01:58:13.116002       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 18:59:16.847571   14960 command_runner.go:130] ! I0420 01:58:13.139685       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0419 18:59:16.847571   14960 command_runner.go:130] ! I0420 01:58:13.148344       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.848113   14960 command_runner.go:130] ! I0420 01:58:13.152489       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.848202   14960 command_runner.go:130] ! I0420 01:58:13.140934       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0419 18:59:16.848202   14960 command_runner.go:130] ! I0420 01:58:13.151083       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000\" does not exist"
	I0419 18:59:16.848202   14960 command_runner.go:130] ! I0420 01:58:13.141105       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.156086       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.156676       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m02\" does not exist"
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.156750       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.156865       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.142425       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.157020       1 shared_informer.go:320] Caches are synced for expand
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.159992       1 shared_informer.go:320] Caches are synced for ephemeral
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.145957       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.162320       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.165325       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.165759       1 shared_informer.go:320] Caches are synced for job
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.169537       1 shared_informer.go:320] Caches are synced for service account
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.171293       1 shared_informer.go:320] Caches are synced for node
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.178178       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.178222       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.178230       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.178237       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.178270       1 shared_informer.go:320] Caches are synced for attach detach
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.179699       1 shared_informer.go:320] Caches are synced for PV protection
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.183856       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.183905       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.188521       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.195859       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.200417       1 shared_informer.go:320] Caches are synced for crt configmap
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.201881       1 shared_informer.go:320] Caches are synced for persistent volume
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.204647       1 shared_informer.go:320] Caches are synced for endpoint
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.207356       1 shared_informer.go:320] Caches are synced for PVC protection
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.213532       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.214173       1 shared_informer.go:320] Caches are synced for namespace
	I0419 18:59:16.848774   14960 command_runner.go:130] ! I0420 01:58:13.219105       1 shared_informer.go:320] Caches are synced for GC
	I0419 18:59:16.849313   14960 command_runner.go:130] ! I0420 01:58:13.228919       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.535929ms"
	I0419 18:59:16.849313   14960 command_runner.go:130] ! I0420 01:58:13.230155       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.901µs"
	I0419 18:59:16.849313   14960 command_runner.go:130] ! I0420 01:58:13.230170       1 shared_informer.go:320] Caches are synced for HPA
	I0419 18:59:16.849313   14960 command_runner.go:130] ! I0420 01:58:13.234086       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0419 18:59:16.849313   14960 command_runner.go:130] ! I0420 01:58:13.236046       1 shared_informer.go:320] Caches are synced for TTL
	I0419 18:59:16.849313   14960 command_runner.go:130] ! I0420 01:58:13.240266       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.682408ms"
	I0419 18:59:16.849431   14960 command_runner.go:130] ! I0420 01:58:13.240992       1 shared_informer.go:320] Caches are synced for deployment
	I0419 18:59:16.849431   14960 command_runner.go:130] ! I0420 01:58:13.243741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="114.104µs"
	I0419 18:59:16.849431   14960 command_runner.go:130] ! I0420 01:58:13.248776       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0419 18:59:16.849431   14960 command_runner.go:130] ! I0420 01:58:13.252859       1 shared_informer.go:320] Caches are synced for daemon sets
	I0419 18:59:16.849431   14960 command_runner.go:130] ! I0420 01:58:13.253008       1 shared_informer.go:320] Caches are synced for taint
	I0419 18:59:16.849503   14960 command_runner.go:130] ! I0420 01:58:13.259997       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0419 18:59:16.849503   14960 command_runner.go:130] ! I0420 01:58:13.297486       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000"
	I0419 18:59:16.849560   14960 command_runner.go:130] ! I0420 01:58:13.297542       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m02"
	I0419 18:59:16.849560   14960 command_runner.go:130] ! I0420 01:58:13.297627       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m03"
	I0419 18:59:16.849560   14960 command_runner.go:130] ! I0420 01:58:13.297865       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0419 18:59:16.849560   14960 command_runner.go:130] ! I0420 01:58:13.335459       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0419 18:59:16.849623   14960 command_runner.go:130] ! I0420 01:58:13.374436       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:16.849623   14960 command_runner.go:130] ! I0420 01:58:13.389294       1 shared_informer.go:320] Caches are synced for cronjob
	I0419 18:59:16.849623   14960 command_runner.go:130] ! I0420 01:58:13.392315       1 shared_informer.go:320] Caches are synced for disruption
	I0419 18:59:16.849623   14960 command_runner.go:130] ! I0420 01:58:13.397172       1 shared_informer.go:320] Caches are synced for stateful set
	I0419 18:59:16.849678   14960 command_runner.go:130] ! I0420 01:58:13.416186       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 18:59:16.849678   14960 command_runner.go:130] ! I0420 01:58:13.857437       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:16.849678   14960 command_runner.go:130] ! I0420 01:58:13.878325       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 18:59:16.849735   14960 command_runner.go:130] ! I0420 01:58:13.878534       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0419 18:59:16.849735   14960 command_runner.go:130] ! I0420 01:58:40.290168       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0419 18:59:16.849804   14960 command_runner.go:130] ! I0420 01:58:53.395955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.694507ms"
	I0419 18:59:16.849804   14960 command_runner.go:130] ! I0420 01:58:53.396146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.003µs"
	I0419 18:59:16.849804   14960 command_runner.go:130] ! I0420 01:59:07.033370       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.713655ms"
	I0419 18:59:16.849881   14960 command_runner.go:130] ! I0420 01:59:07.033533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.092µs"
	I0419 18:59:16.849881   14960 command_runner.go:130] ! I0420 01:59:07.047220       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.391µs"
	I0419 18:59:16.849936   14960 command_runner.go:130] ! I0420 01:59:07.121391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.338984ms"
	I0419 18:59:16.849936   14960 command_runner.go:130] ! I0420 01:59:07.121503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.691µs"
	I0419 18:59:16.868154   14960 logs.go:123] Gathering logs for kube-proxy [a6586791413d] ...
	I0419 18:59:16.868154   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6586791413d"
	I0419 18:59:16.901180   14960 command_runner.go:130] ! I0420 01:35:26.120497       1 server_linux.go:69] "Using iptables proxy"
	I0419 18:59:16.902068   14960 command_runner.go:130] ! I0420 01:35:26.156956       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.42.231"]
	I0419 18:59:16.902148   14960 command_runner.go:130] ! I0420 01:35:26.208282       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 18:59:16.902148   14960 command_runner.go:130] ! I0420 01:35:26.208472       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 18:59:16.902148   14960 command_runner.go:130] ! I0420 01:35:26.208501       1 server_linux.go:165] "Using iptables Proxier"
	I0419 18:59:16.902148   14960 command_runner.go:130] ! I0420 01:35:26.214693       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 18:59:16.902148   14960 command_runner.go:130] ! I0420 01:35:26.216114       1 server.go:872] "Version info" version="v1.30.0"
	I0419 18:59:16.902148   14960 command_runner.go:130] ! I0420 01:35:26.216181       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:16.902148   14960 command_runner.go:130] ! I0420 01:35:26.219192       1 config.go:192] "Starting service config controller"
	I0419 18:59:16.902148   14960 command_runner.go:130] ! I0420 01:35:26.219810       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 18:59:16.902148   14960 command_runner.go:130] ! I0420 01:35:26.220079       1 config.go:101] "Starting endpoint slice config controller"
	I0419 18:59:16.902287   14960 command_runner.go:130] ! I0420 01:35:26.220093       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 18:59:16.902287   14960 command_runner.go:130] ! I0420 01:35:26.221802       1 config.go:319] "Starting node config controller"
	I0419 18:59:16.902287   14960 command_runner.go:130] ! I0420 01:35:26.221980       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 18:59:16.902287   14960 command_runner.go:130] ! I0420 01:35:26.320313       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 18:59:16.902355   14960 command_runner.go:130] ! I0420 01:35:26.320380       1 shared_informer.go:320] Caches are synced for service config
	I0419 18:59:16.902355   14960 command_runner.go:130] ! I0420 01:35:26.322323       1 shared_informer.go:320] Caches are synced for node config
	I0419 18:59:16.904086   14960 logs.go:123] Gathering logs for kindnet [f8c798c99407] ...
	I0419 18:59:16.904086   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8c798c99407"
	I0419 18:59:16.934121   14960 command_runner.go:130] ! I0420 01:58:03.441751       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0419 18:59:16.934398   14960 command_runner.go:130] ! I0420 01:58:03.511070       1 main.go:107] hostIP = 172.19.42.24
	I0419 18:59:16.934398   14960 command_runner.go:130] ! podIP = 172.19.42.24
	I0419 18:59:16.934463   14960 command_runner.go:130] ! I0420 01:58:03.513110       1 main.go:116] setting mtu 1500 for CNI 
	I0419 18:59:16.934463   14960 command_runner.go:130] ! I0420 01:58:03.513147       1 main.go:146] kindnetd IP family: "ipv4"
	I0419 18:59:16.934463   14960 command_runner.go:130] ! I0420 01:58:03.513182       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0419 18:59:16.934463   14960 command_runner.go:130] ! I0420 01:58:07.011650       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:16.934463   14960 command_runner.go:130] ! I0420 01:58:10.084231       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:16.934534   14960 command_runner.go:130] ! I0420 01:58:13.156371       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:16.934534   14960 command_runner.go:130] ! I0420 01:58:16.227521       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:16.934602   14960 command_runner.go:130] ! I0420 01:58:19.299385       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:16.934602   14960 command_runner.go:130] ! panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0419 18:59:16.934602   14960 command_runner.go:130] ! goroutine 1 [running]:
	I0419 18:59:16.934665   14960 command_runner.go:130] ! main.main()
	I0419 18:59:16.934665   14960 command_runner.go:130] ! 	/go/src/cmd/kindnetd/main.go:195 +0xd3d
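The kindnetd crash captured above follows a simple retry-then-give-up shape: poll the apiserver for the node list, log each failure, and panic once a retry budget is exhausted. Below is a minimal, hypothetical sketch of that pattern — `retryNodes`, `alwaysFail`, and `maxRetries` are illustrative names, not kindnetd's actual source, and the real daemon waits roughly 3 s between attempts where this sketch sleeps briefly:

```go
// Sketch of the retry loop whose output appears in the log above:
// fetch the node list, log each failure, stop after a fixed number
// of attempts. Assumptions: names and retry count are illustrative.
package main

import (
	"errors"
	"fmt"
	"time"
)

const maxRetries = 5 // illustrative; the log shows 5 failed attempts before the panic

// alwaysFail stands in for the apiserver call that keeps returning
// "connect: no route to host" in the log.
func alwaysFail() error {
	return errors.New("dial tcp 10.96.0.1:443: connect: no route to host")
}

// retryNodes calls fetch up to max times, logging each failure.
// It returns the number of attempts made and the last error
// (nil if an attempt succeeded).
func retryNodes(fetch func() error, max int) (int, error) {
	var err error
	for i := 1; i <= max; i++ {
		if err = fetch(); err == nil {
			return i, nil
		}
		fmt.Printf("Failed to get nodes, retrying after error: %v\n", err)
		time.Sleep(10 * time.Millisecond) // kindnetd backs off ~3s here
	}
	return max, err
}

func main() {
	if _, err := retryNodes(alwaysFail, maxRetries); err != nil {
		// kindnetd panics at this point, which is the crash recorded above.
		fmt.Printf("Reached maximum retries obtaining node list: %v\n", err)
	}
}
```

In the log this loop never succeeds because the pod has no route to the service VIP 10.96.0.1, so the daemon exits via panic and the container shows as `Exited` in the `crictl ps -a` listing that follows.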
	I0419 18:59:16.935608   14960 logs.go:123] Gathering logs for container status ...
	I0419 18:59:16.935608   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 18:59:17.009273   14960 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0419 18:59:17.009273   14960 command_runner.go:130] > d608b74b0597f       8c811b4aec35f                                                                                         12 seconds ago       Running             busybox                   1                   75ff9f4e9dde2       busybox-fc5497c4f-xnz2k
	I0419 18:59:17.009406   14960 command_runner.go:130] > 352cf21a3e202       cbb01a7bd410d                                                                                         12 seconds ago       Running             coredns                   1                   f28a1e746a9b4       coredns-7db6d8ff4d-7w477
	I0419 18:59:17.009406   14960 command_runner.go:130] > c6f350bee7762       6e38f40d628db                                                                                         32 seconds ago       Running             storage-provisioner       2                   5472c1fba3929       storage-provisioner
	I0419 18:59:17.009406   14960 command_runner.go:130] > ae0b21715f861       4950bb10b3f87                                                                                         41 seconds ago       Running             kindnet-cni               2                   b5a777eba295e       kindnet-s4fsr
	I0419 18:59:17.009406   14960 command_runner.go:130] > f8c798c994078       4950bb10b3f87                                                                                         About a minute ago   Exited              kindnet-cni               1                   b5a777eba295e       kindnet-s4fsr
	I0419 18:59:17.009507   14960 command_runner.go:130] > 45383c4290ad1       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   5472c1fba3929       storage-provisioner
	I0419 18:59:17.009507   14960 command_runner.go:130] > e438af0f1ec9e       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   09f65a6953038       kube-proxy-kj76x
	I0419 18:59:17.009507   14960 command_runner.go:130] > 2deabe4dbdf41       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   ab9ff1d906880       etcd-multinode-348000
	I0419 18:59:17.009566   14960 command_runner.go:130] > bd3aa93bac25b       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   d7052a6f04def       kube-apiserver-multinode-348000
	I0419 18:59:17.009632   14960 command_runner.go:130] > b67f2295d26ca       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   118cca57d1f54       kube-controller-manager-multinode-348000
	I0419 18:59:17.009632   14960 command_runner.go:130] > d57aee391c146       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   e8baa597c1467       kube-scheduler-multinode-348000
	I0419 18:59:17.009694   14960 command_runner.go:130] > d8afb3e1fb946       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   476e3efb38684       busybox-fc5497c4f-xnz2k
	I0419 18:59:17.009694   14960 command_runner.go:130] > 627b84abf45cd       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   2dd294415aae1       coredns-7db6d8ff4d-7w477
	I0419 18:59:17.009765   14960 command_runner.go:130] > a6586791413d0       a0bf559e280cf                                                                                         23 minutes ago       Exited              kube-proxy                0                   7935893e9f22a       kube-proxy-kj76x
	I0419 18:59:17.009765   14960 command_runner.go:130] > 9638ddcd54285       c7aad43836fa5                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   6e420625b84be       kube-controller-manager-multinode-348000
	I0419 18:59:17.009841   14960 command_runner.go:130] > e476774b8f77e       259c8277fcbbc                                                                                         24 minutes ago       Exited              kube-scheduler            0                   e5d733991bf1a       kube-scheduler-multinode-348000
	I0419 18:59:17.012000   14960 logs.go:123] Gathering logs for describe nodes ...
	I0419 18:59:17.012000   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 18:59:17.214899   14960 command_runner.go:130] > Name:               multinode-348000
	I0419 18:59:17.215860   14960 command_runner.go:130] > Roles:              control-plane
	I0419 18:59:17.215895   14960 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     kubernetes.io/hostname=multinode-348000
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     kubernetes.io/os=linux
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     minikube.k8s.io/name=multinode-348000
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_04_19T18_35_09_0700
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0419 18:59:17.215895   14960 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0419 18:59:17.216024   14960 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0419 18:59:17.216024   14960 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0419 18:59:17.216024   14960 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0419 18:59:17.216024   14960 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0419 18:59:17.216024   14960 command_runner.go:130] > CreationTimestamp:  Sat, 20 Apr 2024 01:35:05 +0000
	I0419 18:59:17.216084   14960 command_runner.go:130] > Taints:             <none>
	I0419 18:59:17.216084   14960 command_runner.go:130] > Unschedulable:      false
	I0419 18:59:17.216084   14960 command_runner.go:130] > Lease:
	I0419 18:59:17.216084   14960 command_runner.go:130] >   HolderIdentity:  multinode-348000
	I0419 18:59:17.216084   14960 command_runner.go:130] >   AcquireTime:     <unset>
	I0419 18:59:17.216127   14960 command_runner.go:130] >   RenewTime:       Sat, 20 Apr 2024 01:59:11 +0000
	I0419 18:59:17.216127   14960 command_runner.go:130] > Conditions:
	I0419 18:59:17.216127   14960 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0419 18:59:17.216127   14960 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0419 18:59:17.216127   14960 command_runner.go:130] >   MemoryPressure   False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0419 18:59:17.216127   14960 command_runner.go:130] >   DiskPressure     False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0419 18:59:17.216127   14960 command_runner.go:130] >   PIDPressure      False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0419 18:59:17.216235   14960 command_runner.go:130] >   Ready            True    Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:58:40 +0000   KubeletReady                 kubelet is posting ready status
	I0419 18:59:17.216235   14960 command_runner.go:130] > Addresses:
	I0419 18:59:17.216235   14960 command_runner.go:130] >   InternalIP:  172.19.42.24
	I0419 18:59:17.216235   14960 command_runner.go:130] >   Hostname:    multinode-348000
	I0419 18:59:17.216235   14960 command_runner.go:130] > Capacity:
	I0419 18:59:17.216235   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:17.216235   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:17.216235   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:17.216352   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:17.216352   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:17.216352   14960 command_runner.go:130] > Allocatable:
	I0419 18:59:17.216352   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:17.216352   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:17.216352   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:17.216352   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:17.216352   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:17.216352   14960 command_runner.go:130] > System Info:
	I0419 18:59:17.216352   14960 command_runner.go:130] >   Machine ID:                 bd21fc8af31a4161a4396c16b70a2fc3
	I0419 18:59:17.216352   14960 command_runner.go:130] >   System UUID:                fdc3fb6e-1818-9a4e-b496-b7ed0124a8e6
	I0419 18:59:17.216352   14960 command_runner.go:130] >   Boot ID:                    047b982b-9f97-4a1a-8f8a-a308f369753b
	I0419 18:59:17.216352   14960 command_runner.go:130] >   Kernel Version:             5.10.207
	I0419 18:59:17.216468   14960 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0419 18:59:17.216468   14960 command_runner.go:130] >   Operating System:           linux
	I0419 18:59:17.216468   14960 command_runner.go:130] >   Architecture:               amd64
	I0419 18:59:17.216468   14960 command_runner.go:130] >   Container Runtime Version:  docker://26.0.1
	I0419 18:59:17.216468   14960 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0419 18:59:17.216468   14960 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0419 18:59:17.216468   14960 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0419 18:59:17.216468   14960 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0419 18:59:17.216468   14960 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0419 18:59:17.216468   14960 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0419 18:59:17.216468   14960 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0419 18:59:17.216468   14960 command_runner.go:130] >   default                     busybox-fc5497c4f-xnz2k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0419 18:59:17.216593   14960 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-7w477                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0419 18:59:17.216593   14960 command_runner.go:130] >   kube-system                 etcd-multinode-348000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	I0419 18:59:17.216593   14960 command_runner.go:130] >   kube-system                 kindnet-s4fsr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0419 18:59:17.216593   14960 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-348000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	I0419 18:59:17.216593   14960 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-348000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0419 18:59:17.216593   14960 command_runner.go:130] >   kube-system                 kube-proxy-kj76x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0419 18:59:17.216593   14960 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-348000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0419 18:59:17.216720   14960 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0419 18:59:17.216720   14960 command_runner.go:130] > Allocated resources:
	I0419 18:59:17.216720   14960 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0419 18:59:17.216720   14960 command_runner.go:130] >   Resource           Requests     Limits
	I0419 18:59:17.216720   14960 command_runner.go:130] >   --------           --------     ------
	I0419 18:59:17.216720   14960 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0419 18:59:17.216720   14960 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0419 18:59:17.216720   14960 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0419 18:59:17.216720   14960 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0419 18:59:17.216720   14960 command_runner.go:130] > Events:
	I0419 18:59:17.216720   14960 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0419 18:59:17.216720   14960 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0419 18:59:17.216720   14960 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0419 18:59:17.216840   14960 command_runner.go:130] >   Normal  Starting                 73s                kube-proxy       
	I0419 18:59:17.216840   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-348000 status is now: NodeHasSufficientPID
	I0419 18:59:17.216840   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:17.216840   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-348000 status is now: NodeHasSufficientMemory
	I0419 18:59:17.216840   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-348000 status is now: NodeHasNoDiskPressure
	I0419 18:59:17.216840   14960 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0419 18:59:17.216945   14960 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-348000 event: Registered Node multinode-348000 in Controller
	I0419 18:59:17.216945   14960 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-348000 status is now: NodeReady
	I0419 18:59:17.216945   14960 command_runner.go:130] >   Normal  Starting                 82s                kubelet          Starting kubelet.
	I0419 18:59:17.216945   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  82s (x8 over 82s)  kubelet          Node multinode-348000 status is now: NodeHasSufficientMemory
	I0419 18:59:17.216945   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    82s (x8 over 82s)  kubelet          Node multinode-348000 status is now: NodeHasNoDiskPressure
	I0419 18:59:17.217054   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     82s (x7 over 82s)  kubelet          Node multinode-348000 status is now: NodeHasSufficientPID
	I0419 18:59:17.217054   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:17.217054   14960 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-348000 event: Registered Node multinode-348000 in Controller
	I0419 18:59:17.245375   14960 command_runner.go:130] > Name:               multinode-348000-m02
	I0419 18:59:17.245375   14960 command_runner.go:130] > Roles:              <none>
	I0419 18:59:17.245375   14960 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     kubernetes.io/hostname=multinode-348000-m02
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     kubernetes.io/os=linux
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     minikube.k8s.io/name=multinode-348000
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_04_19T18_38_19_0700
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0419 18:59:17.245375   14960 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0419 18:59:17.245375   14960 command_runner.go:130] > CreationTimestamp:  Sat, 20 Apr 2024 01:38:18 +0000
	I0419 18:59:17.245375   14960 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0419 18:59:17.245375   14960 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0419 18:59:17.245375   14960 command_runner.go:130] > Unschedulable:      false
	I0419 18:59:17.245375   14960 command_runner.go:130] > Lease:
	I0419 18:59:17.245375   14960 command_runner.go:130] >   HolderIdentity:  multinode-348000-m02
	I0419 18:59:17.245375   14960 command_runner.go:130] >   AcquireTime:     <unset>
	I0419 18:59:17.245375   14960 command_runner.go:130] >   RenewTime:       Sat, 20 Apr 2024 01:54:49 +0000
	I0419 18:59:17.245375   14960 command_runner.go:130] > Conditions:
	I0419 18:59:17.245375   14960 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0419 18:59:17.245375   14960 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0419 18:59:17.245375   14960 command_runner.go:130] >   MemoryPressure   Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:17.245375   14960 command_runner.go:130] >   DiskPressure     Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:17.245375   14960 command_runner.go:130] >   PIDPressure      Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:17.245375   14960 command_runner.go:130] >   Ready            Unknown   Sat, 20 Apr 2024 01:54:38 +0000   Sat, 20 Apr 2024 01:58:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:17.245375   14960 command_runner.go:130] > Addresses:
	I0419 18:59:17.245945   14960 command_runner.go:130] >   InternalIP:  172.19.32.249
	I0419 18:59:17.246090   14960 command_runner.go:130] >   Hostname:    multinode-348000-m02
	I0419 18:59:17.246090   14960 command_runner.go:130] > Capacity:
	I0419 18:59:17.246156   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:17.246156   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:17.246156   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:17.246156   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:17.246156   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:17.246156   14960 command_runner.go:130] > Allocatable:
	I0419 18:59:17.246156   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:17.246156   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:17.246156   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:17.246156   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:17.246156   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:17.246156   14960 command_runner.go:130] > System Info:
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Machine ID:                 ea453a3100b34d789441206109708446
	I0419 18:59:17.246156   14960 command_runner.go:130] >   System UUID:                9f7972f9-8942-ef4f-b0cf-029b405f5832
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Boot ID:                    d8ef37df-1396-47c1-8bea-04667e5bc60b
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Kernel Version:             5.10.207
	I0419 18:59:17.246156   14960 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Operating System:           linux
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Architecture:               amd64
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Container Runtime Version:  docker://26.0.1
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0419 18:59:17.246156   14960 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0419 18:59:17.246156   14960 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0419 18:59:17.246156   14960 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0419 18:59:17.246156   14960 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0419 18:59:17.246156   14960 command_runner.go:130] >   default                     busybox-fc5497c4f-2d5hs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0419 18:59:17.246156   14960 command_runner.go:130] >   kube-system                 kindnet-s98rh              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0419 18:59:17.246156   14960 command_runner.go:130] >   kube-system                 kube-proxy-bjv9b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0419 18:59:17.246156   14960 command_runner.go:130] > Allocated resources:
	I0419 18:59:17.246156   14960 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Resource           Requests   Limits
	I0419 18:59:17.246156   14960 command_runner.go:130] >   --------           --------   ------
	I0419 18:59:17.246156   14960 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0419 18:59:17.246156   14960 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0419 18:59:17.246156   14960 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0419 18:59:17.246156   14960 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0419 18:59:17.246156   14960 command_runner.go:130] > Events:
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0419 18:59:17.246156   14960 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-348000-m02 status is now: NodeHasSufficientMemory
	I0419 18:59:17.246156   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-348000-m02 status is now: NodeHasNoDiskPressure
	I0419 18:59:17.246696   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-348000-m02 status is now: NodeHasSufficientPID
	I0419 18:59:17.246696   14960 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-348000-m02 event: Registered Node multinode-348000-m02 in Controller
	I0419 18:59:17.246744   14960 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-348000-m02 status is now: NodeReady
	I0419 18:59:17.246744   14960 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-348000-m02 event: Registered Node multinode-348000-m02 in Controller
	I0419 18:59:17.246744   14960 command_runner.go:130] >   Normal  NodeNotReady             24s                node-controller  Node multinode-348000-m02 status is now: NodeNotReady
	I0419 18:59:17.279540   14960 command_runner.go:130] > Name:               multinode-348000-m03
	I0419 18:59:17.280291   14960 command_runner.go:130] > Roles:              <none>
	I0419 18:59:17.280344   14960 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0419 18:59:17.280344   14960 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0419 18:59:17.280344   14960 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0419 18:59:17.280382   14960 command_runner.go:130] >                     kubernetes.io/hostname=multinode-348000-m03
	I0419 18:59:17.280399   14960 command_runner.go:130] >                     kubernetes.io/os=linux
	I0419 18:59:17.280399   14960 command_runner.go:130] >                     minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	I0419 18:59:17.280399   14960 command_runner.go:130] >                     minikube.k8s.io/name=multinode-348000
	I0419 18:59:17.280399   14960 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0419 18:59:17.280399   14960 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_04_19T18_53_29_0700
	I0419 18:59:17.280399   14960 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0419 18:59:17.280493   14960 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0419 18:59:17.280493   14960 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0419 18:59:17.280538   14960 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0419 18:59:17.280538   14960 command_runner.go:130] > CreationTimestamp:  Sat, 20 Apr 2024 01:53:28 +0000
	I0419 18:59:17.280577   14960 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0419 18:59:17.280577   14960 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0419 18:59:17.280577   14960 command_runner.go:130] > Unschedulable:      false
	I0419 18:59:17.280577   14960 command_runner.go:130] > Lease:
	I0419 18:59:17.280629   14960 command_runner.go:130] >   HolderIdentity:  multinode-348000-m03
	I0419 18:59:17.280629   14960 command_runner.go:130] >   AcquireTime:     <unset>
	I0419 18:59:17.280629   14960 command_runner.go:130] >   RenewTime:       Sat, 20 Apr 2024 01:54:29 +0000
	I0419 18:59:17.280666   14960 command_runner.go:130] > Conditions:
	I0419 18:59:17.280666   14960 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0419 18:59:17.280666   14960 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0419 18:59:17.280712   14960 command_runner.go:130] >   MemoryPressure   Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:17.280731   14960 command_runner.go:130] >   DiskPressure     Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:17.280768   14960 command_runner.go:130] >   PIDPressure      Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:17.280768   14960 command_runner.go:130] >   Ready            Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0419 18:59:17.280813   14960 command_runner.go:130] > Addresses:
	I0419 18:59:17.280830   14960 command_runner.go:130] >   InternalIP:  172.19.37.59
	I0419 18:59:17.280830   14960 command_runner.go:130] >   Hostname:    multinode-348000-m03
	I0419 18:59:17.280830   14960 command_runner.go:130] > Capacity:
	I0419 18:59:17.280865   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:17.280865   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:17.280865   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:17.280865   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:17.280865   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:17.280910   14960 command_runner.go:130] > Allocatable:
	I0419 18:59:17.280928   14960 command_runner.go:130] >   cpu:                2
	I0419 18:59:17.280928   14960 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0419 18:59:17.280928   14960 command_runner.go:130] >   hugepages-2Mi:      0
	I0419 18:59:17.280928   14960 command_runner.go:130] >   memory:             2164264Ki
	I0419 18:59:17.280928   14960 command_runner.go:130] >   pods:               110
	I0419 18:59:17.280928   14960 command_runner.go:130] > System Info:
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Machine ID:                 02e45e9bf03f4852a443a43ac6a8538b
	I0419 18:59:17.280928   14960 command_runner.go:130] >   System UUID:                37a43d59-2157-6e44-8d13-6c975ea12fea
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Boot ID:                    404bc64b-d4fc-4c63-a589-8191649bdfaa
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Kernel Version:             5.10.207
	I0419 18:59:17.280928   14960 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Operating System:           linux
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Architecture:               amd64
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Container Runtime Version:  docker://26.0.1
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0419 18:59:17.280928   14960 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0419 18:59:17.280928   14960 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0419 18:59:17.280928   14960 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0419 18:59:17.280928   14960 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0419 18:59:17.280928   14960 command_runner.go:130] >   kube-system                 kindnet-mg8qs       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0419 18:59:17.280928   14960 command_runner.go:130] >   kube-system                 kube-proxy-2jjsq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0419 18:59:17.280928   14960 command_runner.go:130] > Allocated resources:
	I0419 18:59:17.280928   14960 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Resource           Requests   Limits
	I0419 18:59:17.280928   14960 command_runner.go:130] >   --------           --------   ------
	I0419 18:59:17.280928   14960 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0419 18:59:17.280928   14960 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0419 18:59:17.280928   14960 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0419 18:59:17.280928   14960 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0419 18:59:17.280928   14960 command_runner.go:130] > Events:
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0419 18:59:17.280928   14960 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Normal  Starting                 5m45s                  kube-proxy       
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientMemory
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-348000-m03 status is now: NodeHasNoDiskPressure
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientPID
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-348000-m03 status is now: NodeReady
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Normal  Starting                 5m49s                  kubelet          Starting kubelet.
	I0419 18:59:17.280928   14960 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m49s (x2 over 5m49s)  kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientMemory
	I0419 18:59:17.281468   14960 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m49s (x2 over 5m49s)  kubelet          Node multinode-348000-m03 status is now: NodeHasNoDiskPressure
	I0419 18:59:17.281468   14960 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m49s (x2 over 5m49s)  kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientPID
	I0419 18:59:17.281468   14960 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m49s                  kubelet          Updated Node Allocatable limit across pods
	I0419 18:59:17.281468   14960 command_runner.go:130] >   Normal  RegisteredNode           5m45s                  node-controller  Node multinode-348000-m03 event: Registered Node multinode-348000-m03 in Controller
	I0419 18:59:17.281555   14960 command_runner.go:130] >   Normal  NodeReady                5m41s                  kubelet          Node multinode-348000-m03 status is now: NodeReady
	I0419 18:59:17.281555   14960 command_runner.go:130] >   Normal  NodeNotReady             4m4s                   node-controller  Node multinode-348000-m03 status is now: NodeNotReady
	I0419 18:59:17.281555   14960 command_runner.go:130] >   Normal  RegisteredNode           64s                    node-controller  Node multinode-348000-m03 event: Registered Node multinode-348000-m03 in Controller
	I0419 18:59:17.298002   14960 logs.go:123] Gathering logs for kube-apiserver [bd3aa93bac25] ...
	I0419 18:59:17.298078   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd3aa93bac25"
	I0419 18:59:17.332260   14960 command_runner.go:130] ! I0420 01:57:57.501840       1 options.go:221] external host was not specified, using 172.19.42.24
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:57.505380       1 server.go:148] Version: v1.30.0
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:57.505690       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:58.138487       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:58.138530       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:58.138987       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:58.139098       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:58.139890       1 instance.go:299] Using reconciler: lease
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.078678       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.078889       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.354874       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.355339       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.630985       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.818361       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.834974       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.835019       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.835028       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.835870       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.835981       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.837241       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.838781       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.838919       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.838930       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.841133       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.841240       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.842492       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.842627       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.842640       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! I0420 01:57:59.843439       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.843519       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.332344   14960 command_runner.go:130] ! W0420 01:57:59.843649       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.332898   14960 command_runner.go:130] ! I0420 01:57:59.844516       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0419 18:59:17.332898   14960 command_runner.go:130] ! I0420 01:57:59.847031       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0419 18:59:17.332898   14960 command_runner.go:130] ! W0420 01:57:59.847132       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.332971   14960 command_runner.go:130] ! W0420 01:57:59.847143       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:17.332971   14960 command_runner.go:130] ! I0420 01:57:59.847848       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0419 18:59:17.332971   14960 command_runner.go:130] ! W0420 01:57:59.847881       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.332971   14960 command_runner.go:130] ! W0420 01:57:59.847889       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:17.333051   14960 command_runner.go:130] ! I0420 01:57:59.849069       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0419 18:59:17.333051   14960 command_runner.go:130] ! W0420 01:57:59.849173       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0419 18:59:17.333156   14960 command_runner.go:130] ! I0420 01:57:59.851437       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0419 18:59:17.333156   14960 command_runner.go:130] ! W0420 01:57:59.851563       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.333156   14960 command_runner.go:130] ! W0420 01:57:59.851574       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:17.333242   14960 command_runner.go:130] ! I0420 01:57:59.852258       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0419 18:59:17.333242   14960 command_runner.go:130] ! W0420 01:57:59.852357       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.333242   14960 command_runner.go:130] ! W0420 01:57:59.852367       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:17.333314   14960 command_runner.go:130] ! I0420 01:57:59.855318       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0419 18:59:17.333314   14960 command_runner.go:130] ! W0420 01:57:59.855413       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.333314   14960 command_runner.go:130] ! W0420 01:57:59.855499       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:17.333314   14960 command_runner.go:130] ! I0420 01:57:59.857232       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0419 18:59:17.333379   14960 command_runner.go:130] ! I0420 01:57:59.859073       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0419 18:59:17.333379   14960 command_runner.go:130] ! W0420 01:57:59.859177       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0419 18:59:17.333379   14960 command_runner.go:130] ! W0420 01:57:59.859187       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.333379   14960 command_runner.go:130] ! I0420 01:57:59.866540       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0419 18:59:17.333379   14960 command_runner.go:130] ! W0420 01:57:59.866633       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0419 18:59:17.333379   14960 command_runner.go:130] ! W0420 01:57:59.866643       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0419 18:59:17.333499   14960 command_runner.go:130] ! I0420 01:57:59.873672       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0419 18:59:17.333537   14960 command_runner.go:130] ! W0420 01:57:59.873814       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.333537   14960 command_runner.go:130] ! W0420 01:57:59.873827       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0419 18:59:17.333537   14960 command_runner.go:130] ! I0420 01:57:59.875959       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0419 18:59:17.333581   14960 command_runner.go:130] ! W0420 01:57:59.875999       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.333581   14960 command_runner.go:130] ! I0420 01:57:59.909243       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0419 18:59:17.333581   14960 command_runner.go:130] ! W0420 01:57:59.909284       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0419 18:59:17.333581   14960 command_runner.go:130] ! I0420 01:58:00.597195       1 secure_serving.go:213] Serving securely on [::]:8443
	I0419 18:59:17.333639   14960 command_runner.go:130] ! I0420 01:58:00.597666       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:17.333639   14960 command_runner.go:130] ! I0420 01:58:00.598134       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:17.333639   14960 command_runner.go:130] ! I0420 01:58:00.597703       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0419 18:59:17.333639   14960 command_runner.go:130] ! I0420 01:58:00.597737       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:17.333730   14960 command_runner.go:130] ! I0420 01:58:00.600064       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0419 18:59:17.333730   14960 command_runner.go:130] ! I0420 01:58:00.600948       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0419 18:59:17.333758   14960 command_runner.go:130] ! I0420 01:58:00.601165       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0419 18:59:17.333758   14960 command_runner.go:130] ! I0420 01:58:00.601445       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0419 18:59:17.333795   14960 command_runner.go:130] ! I0420 01:58:00.602539       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0419 18:59:17.333795   14960 command_runner.go:130] ! I0420 01:58:00.602852       1 aggregator.go:163] waiting for initial CRD sync...
	I0419 18:59:17.333795   14960 command_runner.go:130] ! I0420 01:58:00.603187       1 controller.go:78] Starting OpenAPI AggregationController
	I0419 18:59:17.333795   14960 command_runner.go:130] ! I0420 01:58:00.604023       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0419 18:59:17.333851   14960 command_runner.go:130] ! I0420 01:58:00.604384       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0419 18:59:17.333851   14960 command_runner.go:130] ! I0420 01:58:00.606631       1 available_controller.go:423] Starting AvailableConditionController
	I0419 18:59:17.333851   14960 command_runner.go:130] ! I0420 01:58:00.606857       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0419 18:59:17.333851   14960 command_runner.go:130] ! I0420 01:58:00.607138       1 controller.go:116] Starting legacy_token_tracking_controller
	I0419 18:59:17.333944   14960 command_runner.go:130] ! I0420 01:58:00.607178       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0419 18:59:17.333944   14960 command_runner.go:130] ! I0420 01:58:00.607325       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0419 18:59:17.333976   14960 command_runner.go:130] ! I0420 01:58:00.607349       1 controller.go:139] Starting OpenAPI controller
	I0419 18:59:17.333976   14960 command_runner.go:130] ! I0420 01:58:00.607381       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0419 18:59:17.334001   14960 command_runner.go:130] ! I0420 01:58:00.607407       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0419 18:59:17.334001   14960 command_runner.go:130] ! I0420 01:58:00.607409       1 naming_controller.go:291] Starting NamingConditionController
	I0419 18:59:17.334001   14960 command_runner.go:130] ! I0420 01:58:00.607487       1 establishing_controller.go:76] Starting EstablishingController
	I0419 18:59:17.334049   14960 command_runner.go:130] ! I0420 01:58:00.607512       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0419 18:59:17.334049   14960 command_runner.go:130] ! I0420 01:58:00.607530       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0419 18:59:17.334049   14960 command_runner.go:130] ! I0420 01:58:00.607546       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0419 18:59:17.334049   14960 command_runner.go:130] ! I0420 01:58:00.608170       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0419 18:59:17.334105   14960 command_runner.go:130] ! I0420 01:58:00.608198       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0419 18:59:17.334105   14960 command_runner.go:130] ! I0420 01:58:00.608328       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:17.334105   14960 command_runner.go:130] ! I0420 01:58:00.608421       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 18:59:17.334105   14960 command_runner.go:130] ! I0420 01:58:00.607383       1 controller.go:87] Starting OpenAPI V3 controller
	I0419 18:59:17.334105   14960 command_runner.go:130] ! I0420 01:58:00.709605       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0419 18:59:17.334197   14960 command_runner.go:130] ! I0420 01:58:00.736531       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0419 18:59:17.334197   14960 command_runner.go:130] ! I0420 01:58:00.737086       1 shared_informer.go:320] Caches are synced for configmaps
	I0419 18:59:17.334197   14960 command_runner.go:130] ! I0420 01:58:00.737192       1 aggregator.go:165] initial CRD sync complete...
	I0419 18:59:17.334241   14960 command_runner.go:130] ! I0420 01:58:00.737219       1 autoregister_controller.go:141] Starting autoregister controller
	I0419 18:59:17.334241   14960 command_runner.go:130] ! I0420 01:58:00.737225       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0419 18:59:17.334241   14960 command_runner.go:130] ! I0420 01:58:00.737230       1 cache.go:39] Caches are synced for autoregister controller
	I0419 18:59:17.334336   14960 command_runner.go:130] ! I0420 01:58:00.740699       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 18:59:17.334364   14960 command_runner.go:130] ! I0420 01:58:00.741004       1 policy_source.go:224] refreshing policies
	I0419 18:59:17.334364   14960 command_runner.go:130] ! I0420 01:58:00.742672       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0419 18:59:17.334364   14960 command_runner.go:130] ! I0420 01:58:00.747054       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0419 18:59:17.334418   14960 command_runner.go:130] ! I0420 01:58:00.805770       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0419 18:59:17.334418   14960 command_runner.go:130] ! I0420 01:58:00.807460       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0419 18:59:17.334441   14960 command_runner.go:130] ! I0420 01:58:00.814456       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0419 18:59:17.334441   14960 command_runner.go:130] ! I0420 01:58:00.814490       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0419 18:59:17.334485   14960 command_runner.go:130] ! I0420 01:58:00.815844       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0419 18:59:17.334485   14960 command_runner.go:130] ! I0420 01:58:01.612010       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0419 18:59:17.334485   14960 command_runner.go:130] ! W0420 01:58:02.160618       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.42.231 172.19.42.24]
	I0419 18:59:17.334543   14960 command_runner.go:130] ! I0420 01:58:02.163332       1 controller.go:615] quota admission added evaluator for: endpoints
	I0419 18:59:17.334567   14960 command_runner.go:130] ! I0420 01:58:02.176968       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0419 18:59:17.334567   14960 command_runner.go:130] ! I0420 01:58:03.430204       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0419 18:59:17.334600   14960 command_runner.go:130] ! I0420 01:58:03.761410       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0419 18:59:17.334600   14960 command_runner.go:130] ! I0420 01:58:03.780335       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0419 18:59:17.334600   14960 command_runner.go:130] ! I0420 01:58:03.907022       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0419 18:59:17.334600   14960 command_runner.go:130] ! I0420 01:58:03.924019       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0419 18:59:17.334600   14960 command_runner.go:130] ! W0420 01:58:22.143512       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.42.24]
	I0419 18:59:17.343646   14960 logs.go:123] Gathering logs for kube-proxy [e438af0f1ec9] ...
	I0419 18:59:17.343646   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e438af0f1ec9"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.129201       1 server_linux.go:69] "Using iptables proxy"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.201631       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.42.24"]
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.344058       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.344107       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.344137       1 server_linux.go:165] "Using iptables Proxier"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.353394       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.354462       1 server.go:872] "Version info" version="v1.30.0"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.354693       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.358325       1 config.go:192] "Starting service config controller"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.358366       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.358985       1 config.go:101] "Starting endpoint slice config controller"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.359176       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.358997       1 config.go:319] "Starting node config controller"
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.368409       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.459372       1 shared_informer.go:320] Caches are synced for service config
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.459745       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 18:59:17.376714   14960 command_runner.go:130] ! I0420 01:58:03.470538       1 shared_informer.go:320] Caches are synced for node config
	I0419 18:59:17.378001   14960 logs.go:123] Gathering logs for coredns [627b84abf45c] ...
	I0419 18:59:17.378001   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627b84abf45c"
	I0419 18:59:17.406877   14960 command_runner.go:130] > .:53
	I0419 18:59:17.407915   14960 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93714cfd58e203ac2baa48ea9c7b435951d2a9faed7a5c70b4e84c89c6c1fe4c1dfa41f14b3ebf0f5941dade673a82eaad960061e673dd78dcb856db3393b39d
	I0419 18:59:17.407915   14960 command_runner.go:130] > CoreDNS-1.11.1
	I0419 18:59:17.407915   14960 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0419 18:59:17.407915   14960 command_runner.go:130] > [INFO] 127.0.0.1:37904 - 37003 "HINFO IN 1336380353163369387.5260466772500757990. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.053891439s
	I0419 18:59:17.407915   14960 command_runner.go:130] > [INFO] 10.244.1.2:47846 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002913s
	I0419 18:59:17.408049   14960 command_runner.go:130] > [INFO] 10.244.1.2:60728 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.118385602s
	I0419 18:59:17.408049   14960 command_runner.go:130] > [INFO] 10.244.1.2:48827 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.043741711s
	I0419 18:59:17.408049   14960 command_runner.go:130] > [INFO] 10.244.1.2:57126 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.111854404s
	I0419 18:59:17.408049   14960 command_runner.go:130] > [INFO] 10.244.0.3:44468 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001971s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:58477 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.002287005s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:39825 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000198301s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:54956 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000604s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:48593 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001261s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:58743 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.027871268s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:44517 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002274s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:35998 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000219501s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:58770 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012982932s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:55456 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174201s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:59031 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001304s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:41687 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000198401s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:46929 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003044s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:35877 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000325701s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:53705 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000318601s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:40560 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164401s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:53239 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001239s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:39754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001464s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:41397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001668s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:49126 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001646s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:37850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115501s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:44063 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001443s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:39924 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000607s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:53244 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000622s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:52017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001879s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:55488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000814s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:57536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000778s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:45454 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001788s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:52247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001095s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:46954 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001143s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:47574 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098701s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.1.2:36658 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000170301s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:35421 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001002s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:41995 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132201s
	I0419 18:59:17.408142   14960 command_runner.go:130] > [INFO] 10.244.0.3:36431 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001956s
	I0419 18:59:17.408696   14960 command_runner.go:130] > [INFO] 10.244.0.3:38168 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000222s
	I0419 18:59:17.408696   14960 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0419 18:59:17.408696   14960 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0419 18:59:17.411627   14960 logs.go:123] Gathering logs for kube-scheduler [e476774b8f77] ...
	I0419 18:59:17.411664   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e476774b8f77"
	I0419 18:59:17.438622   14960 command_runner.go:130] ! I0420 01:35:03.474569       1 serving.go:380] Generated self-signed cert in-memory
	I0419 18:59:17.439412   14960 command_runner.go:130] ! W0420 01:35:04.965330       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0419 18:59:17.439487   14960 command_runner.go:130] ! W0420 01:35:04.965379       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:17.439487   14960 command_runner.go:130] ! W0420 01:35:04.965392       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0419 18:59:17.439487   14960 command_runner.go:130] ! W0420 01:35:04.965399       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0419 18:59:17.439546   14960 command_runner.go:130] ! I0420 01:35:05.040739       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0419 18:59:17.439584   14960 command_runner.go:130] ! I0420 01:35:05.040800       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:17.439584   14960 command_runner.go:130] ! I0420 01:35:05.044777       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0419 18:59:17.439584   14960 command_runner.go:130] ! I0420 01:35:05.045192       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 18:59:17.439648   14960 command_runner.go:130] ! I0420 01:35:05.045423       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:17.439648   14960 command_runner.go:130] ! I0420 01:35:05.046180       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 18:59:17.439705   14960 command_runner.go:130] ! W0420 01:35:05.063208       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:17.439705   14960 command_runner.go:130] ! E0420 01:35:05.064240       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:17.439766   14960 command_runner.go:130] ! W0420 01:35:05.063609       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.439798   14960 command_runner.go:130] ! E0420 01:35:05.065130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.439857   14960 command_runner.go:130] ! W0420 01:35:05.063676       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:17.439902   14960 command_runner.go:130] ! E0420 01:35:05.065433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:17.439936   14960 command_runner.go:130] ! W0420 01:35:05.063732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:17.439936   14960 command_runner.go:130] ! E0420 01:35:05.065801       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:17.440042   14960 command_runner.go:130] ! W0420 01:35:05.063780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:17.440042   14960 command_runner.go:130] ! E0420 01:35:05.066820       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:17.440096   14960 command_runner.go:130] ! W0420 01:35:05.063927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440136   14960 command_runner.go:130] ! E0420 01:35:05.067122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440160   14960 command_runner.go:130] ! W0420 01:35:05.063973       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:17.440160   14960 command_runner.go:130] ! E0420 01:35:05.069517       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:17.440219   14960 command_runner.go:130] ! W0420 01:35:05.064025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:17.440219   14960 command_runner.go:130] ! E0420 01:35:05.069884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:17.440285   14960 command_runner.go:130] ! W0420 01:35:05.064095       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:17.440285   14960 command_runner.go:130] ! E0420 01:35:05.070309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:17.440285   14960 command_runner.go:130] ! W0420 01:35:05.064163       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440368   14960 command_runner.go:130] ! E0420 01:35:05.070884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440432   14960 command_runner.go:130] ! W0420 01:35:05.070236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:17.440432   14960 command_runner.go:130] ! E0420 01:35:05.071293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:17.440532   14960 command_runner.go:130] ! W0420 01:35:05.070677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:17.440561   14960 command_runner.go:130] ! E0420 01:35:05.072125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:17.440615   14960 command_runner.go:130] ! W0420 01:35:05.070741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:17.440656   14960 command_runner.go:130] ! E0420 01:35:05.073528       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:17.440681   14960 command_runner.go:130] ! W0420 01:35:05.072410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:17.440726   14960 command_runner.go:130] ! E0420 01:35:05.073910       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:17.440726   14960 command_runner.go:130] ! W0420 01:35:05.072540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440786   14960 command_runner.go:130] ! E0420 01:35:05.074332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440786   14960 command_runner.go:130] ! W0420 01:35:05.987809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:05.988072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.078924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.079045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.146102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.146225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.213142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.213279       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.278808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.279232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.310265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.311126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.333128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.333531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.355993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.356053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.356154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.356365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.490128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.490240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.496247       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.496709       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.552817       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.552917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.607496       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.607914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.608255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.608488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! W0420 01:35:06.623642       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! E0420 01:35:06.624029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0419 18:59:17.440919   14960 command_runner.go:130] ! I0420 01:35:09.746203       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 18:59:17.440919   14960 command_runner.go:130] ! I0420 01:55:30.893306       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0419 18:59:17.440919   14960 command_runner.go:130] ! I0420 01:55:30.893359       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0419 18:59:17.440919   14960 command_runner.go:130] ! I0420 01:55:30.893732       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 18:59:17.441969   14960 command_runner.go:130] ! E0420 01:55:30.894682       1 run.go:74] "command failed" err="finished without leader elect"
	I0419 18:59:17.452692   14960 logs.go:123] Gathering logs for kindnet [ae0b21715f86] ...
	I0419 18:59:17.452692   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0b21715f86"
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:36.715209       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:36.715359       1 main.go:107] hostIP = 172.19.42.24
	I0419 18:59:17.488866   14960 command_runner.go:130] ! podIP = 172.19.42.24
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:36.715480       1 main.go:116] setting mtu 1500 for CNI 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:36.715877       1 main.go:146] kindnetd IP family: "ipv4"
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:36.806023       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:37.413197       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:37.413291       1 main.go:227] handling current node
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:37.413685       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:37.413745       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:37.414005       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.19.32.249 Flags: [] Table: 0} 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:37.506308       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:37.506405       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:37.506676       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.19.37.59 Flags: [] Table: 0} 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:47.525508       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:47.525608       1 main.go:227] handling current node
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:47.525629       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:47.525638       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:47.526101       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:47.526135       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:57.538448       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:57.538834       1 main.go:227] handling current node
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:57.538899       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:57.538926       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:57.539176       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:58:57.539274       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:59:07.555783       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:59:07.555932       1 main.go:227] handling current node
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:59:07.556426       1 main.go:223] Handling node with IPs: map[172.19.32.249:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:59:07.556438       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:59:07.556563       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0419 18:59:17.488866   14960 command_runner.go:130] ! I0420 01:59:07.556590       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0419 18:59:17.491823   14960 logs.go:123] Gathering logs for Docker ...
	I0419 18:59:17.491880   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 18:59:17.531874   14960 command_runner.go:130] > Apr 20 01:56:27 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:17.531942   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:17.531991   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:17.532063   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:17.532063   14960 command_runner.go:130] > Apr 20 01:56:27 minikube cri-dockerd[225]: time="2024-04-20T01:56:27Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0419 18:59:17.532063   14960 command_runner.go:130] > Apr 20 01:56:28 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:17.532134   14960 command_runner.go:130] > Apr 20 01:56:28 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:17.532198   14960 command_runner.go:130] > Apr 20 01:56:28 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:17.532223   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0419 18:59:17.532253   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0419 18:59:17.532253   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:17.532253   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:17.532334   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:17.532363   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:17.532363   14960 command_runner.go:130] > Apr 20 01:56:30 minikube cri-dockerd[409]: time="2024-04-20T01:56:30Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0419 18:59:17.532420   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:17.532420   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:17.532420   14960 command_runner.go:130] > Apr 20 01:56:30 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:17.532482   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:33 minikube cri-dockerd[430]: time="2024-04-20T01:56:33Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:33 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:56:35 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 systemd[1]: Starting Docker Application Container Engine...
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[657]: time="2024-04-20T01:57:18.710176447Z" level=info msg="Starting up"
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[657]: time="2024-04-20T01:57:18.711651787Z" level=info msg="containerd not running, starting managed containerd"
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[657]: time="2024-04-20T01:57:18.716746379Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=664
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.747165139Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778478063Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778645056Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778743452Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.778860747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.780842867Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.780950062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.532530   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781281849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:17.533078   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781381945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.533078   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781405744Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0419 18:59:17.533078   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781418543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.533154   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.781890324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.533154   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.782561296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.533154   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786065554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:17.533224   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786174049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.533224   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786324143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:17.533224   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.786418639Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0419 18:59:17.533315   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.787110911Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0419 18:59:17.533315   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.787239206Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0419 18:59:17.533315   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.787257405Z" level=info msg="metadata content store policy set" policy=shared
	I0419 18:59:17.533377   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794203322Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0419 18:59:17.533377   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794271219Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0419 18:59:17.533377   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794292218Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0419 18:59:17.533377   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794308818Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0419 18:59:17.533377   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794325217Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0419 18:59:17.533491   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794399514Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0419 18:59:17.533520   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.794805397Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0419 18:59:17.533520   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795021089Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0419 18:59:17.533564   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795123284Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0419 18:59:17.533564   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795209281Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0419 18:59:17.533564   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795227280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.533620   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795252079Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.533620   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795270178Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.533682   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795305177Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.533682   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795321176Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.533682   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795336476Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.533748   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795368674Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.533748   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795383074Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.533748   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795405873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533748   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795423972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533748   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795438172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533837   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795453671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533837   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795468970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533901   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795483970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533901   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795576866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533901   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795594465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533953   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795610465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533953   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795628364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.533953   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795642863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.534042   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795657163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.534042   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795671762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.534042   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795713760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0419 18:59:17.534108   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795756259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.534108   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795811856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.534108   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795843255Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0419 18:59:17.534175   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795920052Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0419 18:59:17.534175   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.795944151Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0419 18:59:17.534175   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796175542Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0419 18:59:17.534246   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796194141Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0419 18:59:17.534246   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796263238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.534246   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796305336Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0419 18:59:17.534246   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.796319336Z" level=info msg="NRI interface is disabled by configuration."
	I0419 18:59:17.534366   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.797416591Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0419 18:59:17.534366   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.797499188Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0419 18:59:17.534366   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.797659381Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0419 18:59:17.534366   14960 command_runner.go:130] > Apr 20 01:57:18 multinode-348000 dockerd[664]: time="2024-04-20T01:57:18.798178860Z" level=info msg="containerd successfully booted in 0.054054s"
	I0419 18:59:17.534366   14960 command_runner.go:130] > Apr 20 01:57:19 multinode-348000 dockerd[657]: time="2024-04-20T01:57:19.782299514Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0419 18:59:17.534366   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.015692930Z" level=info msg="Loading containers: start."
	I0419 18:59:17.534485   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.458486133Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0419 18:59:17.534485   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.551244732Z" level=info msg="Loading containers: done."
	I0419 18:59:17.534485   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.579065252Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	I0419 18:59:17.534485   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.579904847Z" level=info msg="Daemon has completed initialization"
	I0419 18:59:17.534485   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.637363974Z" level=info msg="API listen on [::]:2376"
	I0419 18:59:17.534485   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 systemd[1]: Started Docker Application Container Engine.
	I0419 18:59:17.534662   14960 command_runner.go:130] > Apr 20 01:57:20 multinode-348000 dockerd[657]: time="2024-04-20T01:57:20.639403561Z" level=info msg="API listen on /var/run/docker.sock"
	I0419 18:59:17.534662   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.472939019Z" level=info msg="Processing signal 'terminated'"
	I0419 18:59:17.534662   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 systemd[1]: Stopping Docker Application Container Engine...
	I0419 18:59:17.534662   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.475778002Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0419 18:59:17.534662   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.476696029Z" level=info msg="Daemon shutdown complete"
	I0419 18:59:17.534662   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.476992338Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0419 18:59:17.534809   14960 command_runner.go:130] > Apr 20 01:57:46 multinode-348000 dockerd[657]: time="2024-04-20T01:57:46.477157542Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0419 18:59:17.534809   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 systemd[1]: docker.service: Deactivated successfully.
	I0419 18:59:17.534809   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 systemd[1]: Stopped Docker Application Container Engine.
	I0419 18:59:17.534809   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 systemd[1]: Starting Docker Application Container Engine...
	I0419 18:59:17.534809   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:47.551071055Z" level=info msg="Starting up"
	I0419 18:59:17.534809   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:47.552229889Z" level=info msg="containerd not running, starting managed containerd"
	I0419 18:59:17.534809   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:47.555196776Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1058
	I0419 18:59:17.534931   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.593728507Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0419 18:59:17.534931   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623742487Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0419 18:59:17.534931   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623851391Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0419 18:59:17.534931   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623939793Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0419 18:59:17.534931   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.623957394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.534931   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624003795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:17.535068   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624024296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.535068   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624225802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:17.535068   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624329205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.535068   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624352205Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0419 18:59:17.535068   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624363806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.535068   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624391206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.535194   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.624622913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.535194   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.627825907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:17.535194   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.627876709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0419 18:59:17.535302   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628096615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0419 18:59:17.535302   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628227419Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0419 18:59:17.535302   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628259620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0419 18:59:17.535302   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628280321Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0419 18:59:17.535383   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628292621Z" level=info msg="metadata content store policy set" policy=shared
	I0419 18:59:17.535383   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628514127Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0419 18:59:17.535462   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628716033Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0419 18:59:17.535462   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628764035Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0419 18:59:17.535462   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628783935Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0419 18:59:17.535541   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628872138Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0419 18:59:17.535541   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.628938240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0419 18:59:17.535541   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.629513057Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0419 18:59:17.535541   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.629754764Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0419 18:59:17.535618   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.629936569Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0419 18:59:17.535618   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630060973Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0419 18:59:17.535618   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630086474Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.535697   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630105074Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.535697   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630122275Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.535697   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630140375Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.535697   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630157976Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.535786   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630174076Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.535786   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630191277Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.535786   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630206077Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0419 18:59:17.535862   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630234378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.535862   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630252178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.535862   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630267579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.535862   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630283379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.535945   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630298980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.535945   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630314780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.535945   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630328781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.535945   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630360082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.536032   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630377682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.536032   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630410083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.536032   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630423583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.536103   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630455984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.536103   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630487185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.536166   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630505186Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0419 18:59:17.536191   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630528987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630643490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630666391Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630895497Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630922398Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630934798Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.630945799Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.631020001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.631067102Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.631083303Z" level=info msg="NRI interface is disabled by configuration."
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632230736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632319639Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632396541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:47 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:47.632594347Z" level=info msg="containerd successfully booted in 0.042627s"
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:48 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:48.604760074Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:48 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:48.637031921Z" level=info msg="Loading containers: start."
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:48 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:48.936729515Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.021589305Z" level=info msg="Loading containers: done."
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.048182786Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.048316590Z" level=info msg="Daemon has completed initialization"
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.095567976Z" level=info msg="API listen on /var/run/docker.sock"
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 systemd[1]: Started Docker Application Container Engine.
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:49 multinode-348000 dockerd[1052]: time="2024-04-20T01:57:49.098304756Z" level=info msg="API listen on [::]:2376"
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0419 18:59:17.536220   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0419 18:59:17.536751   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0419 18:59:17.536751   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Start docker client with request timeout 0s"
	I0419 18:59:17.536751   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0419 18:59:17.536751   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Loaded network plugin cni"
	I0419 18:59:17.536751   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0419 18:59:17.536751   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0419 18:59:17.536751   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0419 18:59:17.536918   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0419 18:59:17.536918   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:50Z" level=info msg="Start cri-dockerd grpc backend"
	I0419 18:59:17.536945   14960 command_runner.go:130] > Apr 20 01:57:50 multinode-348000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0419 18:59:17.536945   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-xnz2k_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"476e3efb38684054cbc21c027cf1ddd3f9ca47bb829786f8636fd877fd4b2f81\""
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-7w477_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2dd294415aae178d6b9bed0368d49bedc6d0afa8f5b9ad0011c73ffcb2c24b3c\""
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.930297132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.930785146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.930860749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:55.931659072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002064338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002134840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002149541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.002292345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e8baa597c1467ae8c3a1ce9abf0a378ddcffed5a93f7b41dddb4ce4511320dfd/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151299517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151377019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151407720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.151504323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169004837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169190142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169211543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.169324146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/118cca57d1f547838d0c2442f2945e9daf9b041170bf162489525286bf3d75c2/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7052a6f04def38545970026f2934eb29913066396b26eb86f6675e7c0c685db/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:17.537059   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:57:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ab9ff1d9068805d6a2ad10084128436e5b1fcaaa8c64f2f1a5e811455f0f99ee/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:17.537586   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441120322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.537586   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441388229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.537586   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441493933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537586   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.441783141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537586   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.541538868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.537728   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.541743874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.537787   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.541768275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537815   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.542244089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537847   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.635958239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.537847   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.636305549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.537847   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.636479754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.537941   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.636776363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538004   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.703176711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.538049   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.703241613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.538049   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.703253713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538049   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 dockerd[1058]: time="2024-04-20T01:57:56.704949863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538049   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:00Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0419 18:59:17.538131   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.682944236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.538131   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.683066839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.538211   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.683087340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538211   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.683203743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538211   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.775229244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.538287   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.775527153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.538287   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.775671457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538287   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.776004967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538362   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.791300015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.538362   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.791478721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.538362   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.791611925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538439   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:01.792335946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538439   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/09f65a695303814b61d199dd53caa1efad532c76b04176a404206b865fd6b38a/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:17.538516   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5472c1fba3929b8a427273be545db7fb7df3c0ffbf035e24a1d3b71418b9e031/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:17.538576   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.150688061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.538611   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.150834665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.151084573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.152395011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.341191051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.341388457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.341505460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.342279283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:58:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b5a777eba295e3b640d8d8a60aedcc20243d0f4a6fc4d3f3391b06fc6de0247a/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.851490425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.852225247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.852338750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:02.853459583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1052]: time="2024-04-20T01:58:23.324898945Z" level=info msg="ignoring event" container=f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:23.325982179Z" level=info msg="shim disconnected" id=f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919 namespace=moby
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:23.326071582Z" level=warning msg="cleaning up after shim disconnected" id=f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919 namespace=moby
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:23.326085983Z" level=info msg="cleaning up dead shim" namespace=moby
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1052]: time="2024-04-20T01:58:32.676558128Z" level=info msg="ignoring event" container=45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:32.681127769Z" level=info msg="shim disconnected" id=45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702 namespace=moby
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:32.681255073Z" level=warning msg="cleaning up after shim disconnected" id=45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702 namespace=moby
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:32.681323075Z" level=info msg="cleaning up dead shim" namespace=moby
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356286643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356444648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356547351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.538645   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:36.356850260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539171   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.371313874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.539171   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.372274603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.539253   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.372497010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539253   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 dockerd[1058]: time="2024-04-20T01:58:45.373020725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539253   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.468874089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.539327   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.469011493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.469033394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.469948221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.577907307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.578194516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.578360121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:05.578991939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:59:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f28a1e746a9b438367a8e05d2e1a085afb4abec4174f7a7eb80549e02b95047a/resolv.conf as [nameserver 172.19.32.1]"
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 cri-dockerd[1278]: time="2024-04-20T01:59:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/75ff9f4e9dde29a997e4321dd3659a2ce7d479a75826a78c4d3525f1eb5f696f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.046055457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.046333943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.046360842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.047301594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.170326341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.170444835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.170467134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:06 multinode-348000 dockerd[1058]: time="2024-04-20T01:59:06.171235195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539360   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539885   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539885   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539885   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539885   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539885   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.539885   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540038   14960 command_runner.go:130] > Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:12 multinode-348000 dockerd[1052]: 2024/04/20 01:59:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:16 multinode-348000 dockerd[1052]: 2024/04/20 01:59:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:16 multinode-348000 dockerd[1052]: 2024/04/20 01:59:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:16 multinode-348000 dockerd[1052]: 2024/04/20 01:59:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:16 multinode-348000 dockerd[1052]: 2024/04/20 01:59:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540070   14960 command_runner.go:130] > Apr 20 01:59:16 multinode-348000 dockerd[1052]: 2024/04/20 01:59:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540596   14960 command_runner.go:130] > Apr 20 01:59:17 multinode-348000 dockerd[1052]: 2024/04/20 01:59:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540596   14960 command_runner.go:130] > Apr 20 01:59:17 multinode-348000 dockerd[1052]: 2024/04/20 01:59:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540596   14960 command_runner.go:130] > Apr 20 01:59:17 multinode-348000 dockerd[1052]: 2024/04/20 01:59:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540596   14960 command_runner.go:130] > Apr 20 01:59:17 multinode-348000 dockerd[1052]: 2024/04/20 01:59:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.540596   14960 command_runner.go:130] > Apr 20 01:59:17 multinode-348000 dockerd[1052]: 2024/04/20 01:59:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0419 18:59:17.574706   14960 logs.go:123] Gathering logs for kubelet ...
	I0419 18:59:17.574706   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 18:59:17.606833   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: I0420 01:57:51.575772    1390 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: I0420 01:57:51.576306    1390 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: I0420 01:57:51.577194    1390 server.go:927] "Client rotation is on, will bootstrap in background"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 kubelet[1390]: E0420 01:57:51.579651    1390 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:51 multinode-348000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: I0420 01:57:52.300689    1443 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: I0420 01:57:52.301056    1443 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: I0420 01:57:52.301551    1443 server.go:927] "Client rotation is on, will bootstrap in background"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 kubelet[1443]: E0420 01:57:52.301845    1443 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:52 multinode-348000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.955182    1526 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.955367    1526 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.955676    1526 server.go:927] "Client rotation is on, will bootstrap in background"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.957661    1526 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.971626    1526 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.998144    1526 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.998312    1526 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0419 18:59:17.606906   14960 command_runner.go:130] > Apr 20 01:57:54 multinode-348000 kubelet[1526]: I0420 01:57:54.999775    1526 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0419 18:59:17.607527   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:54.999948    1526 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-348000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0419 18:59:17.607527   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.000770    1526 topology_manager.go:138] "Creating topology manager with none policy"
	I0419 18:59:17.607686   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.000879    1526 container_manager_linux.go:301] "Creating device plugin manager"
	I0419 18:59:17.607686   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.001855    1526 state_mem.go:36] "Initialized new in-memory state store"
	I0419 18:59:17.607716   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.003861    1526 kubelet.go:400] "Attempting to sync node with API server"
	I0419 18:59:17.607716   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.003952    1526 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0419 18:59:17.607773   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.004045    1526 kubelet.go:312] "Adding apiserver pod source"
	I0419 18:59:17.607773   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.009472    1526 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0419 18:59:17.607773   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.017989    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.607908   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.018091    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.607908   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.019381    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.607908   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.019428    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.607994   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.019619    1526 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.1" apiVersion="v1"
	I0419 18:59:17.607994   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.022328    1526 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0419 18:59:17.608030   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.023051    1526 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0419 18:59:17.608030   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.025680    1526 server.go:1264] "Started kubelet"
	I0419 18:59:17.608030   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.028955    1526 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0419 18:59:17.608105   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.031361    1526 server.go:455] "Adding debug handlers to kubelet server"
	I0419 18:59:17.608105   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.034499    1526 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0419 18:59:17.608173   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.035670    1526 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0419 18:59:17.608232   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.036524    1526 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.19.42.24:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-348000.17c7da5cb9bb1787  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-348000,UID:multinode-348000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-348000,},FirstTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,LastTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-348000,}"
	I0419 18:59:17.608283   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.053292    1526 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0419 18:59:17.608319   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.062175    1526 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0419 18:59:17.608319   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.067879    1526 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0419 18:59:17.608386   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.097159    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="200ms"
	I0419 18:59:17.608408   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.116285    1526 factory.go:221] Registration of the systemd container factory successfully
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.117073    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.118285    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.117970    1526 reconciler.go:26] "Reconciler: start to sync state"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.118962    1526 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.119576    1526 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.135081    1526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.165861    1526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166700    1526 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166759    1526 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166846    1526 state_mem.go:36] "Initialized new in-memory state store"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.166997    1526 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168395    1526 kubelet.go:2337] "Starting kubelet main sync loop"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.168500    1526 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168338    1526 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168585    1526 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.168613    1526 policy_none.go:49] "None policy: Start"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.167637    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.171087    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.172453    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.172557    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.187830    1526 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.187946    1526 state_mem.go:35] "Initializing new in-memory state store"
	I0419 18:59:17.608436   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.189368    1526 state_mem.go:75] "Updated machine memory state"
	I0419 18:59:17.608963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.195268    1526 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0419 18:59:17.608963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.195483    1526 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0419 18:59:17.608963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.197626    1526 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0419 18:59:17.608963   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.198638    1526 iptables.go:577] "Could not set up iptables canary" err=<
	I0419 18:59:17.609046   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0419 18:59:17.609079   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0419 18:59:17.609079   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0419 18:59:17.609122   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0419 18:59:17.609122   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.201551    1526 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-348000\" not found"
	I0419 18:59:17.609206   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.269451    1526 topology_manager.go:215] "Topology Admit Handler" podUID="30aa2729d0c65b9f89e1ae2d151edd9b" podNamespace="kube-system" podName="kube-controller-manager-multinode-348000"
	I0419 18:59:17.609206   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.271913    1526 topology_manager.go:215] "Topology Admit Handler" podUID="92813b2aed63b63058d3fd06709fa24e" podNamespace="kube-system" podName="kube-scheduler-multinode-348000"
	I0419 18:59:17.609206   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.273779    1526 topology_manager.go:215] "Topology Admit Handler" podUID="af7a3c9321ace7e2a933260472b90113" podNamespace="kube-system" podName="kube-apiserver-multinode-348000"
	I0419 18:59:17.609287   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.275662    1526 topology_manager.go:215] "Topology Admit Handler" podUID="c0cfa3da6a3913c3e67500f6c3e9d72b" podNamespace="kube-system" podName="etcd-multinode-348000"
	I0419 18:59:17.609287   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.281258    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="476e3efb38684054cbc21c027cf1ddd3f9ca47bb829786f8636fd877fd4b2f81"
	I0419 18:59:17.609377   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.281433    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dd294415aae178d6b9bed0368d49bedc6d0afa8f5b9ad0011c73ffcb2c24b3c"
	I0419 18:59:17.609377   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.281454    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5d733991bf1a9e82ffd10768e0652c6c3f983ab24307142345cab3358f068bc"
	I0419 18:59:17.609563   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.297657    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd9e5fae3950c99e6cc71d6166919d407b00212c93827d74e5b83f3896925c0a"
	I0419 18:59:17.609563   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.310354    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="400ms"
	I0419 18:59:17.609643   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.316552    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="187cb57784f4ebcba88e5bf725c118a7d2beec4f543d3864e8f389573f0b11f9"
	I0419 18:59:17.609643   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.332421    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e420625b84be10aa87409a43f4296165b33ed76e82c3ba8a9214abd7177bd38"
	I0419 18:59:17.609719   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.356050    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00d48e11227effb5f0316d58c24e374b4b3f9dcd1b98ac51d6b0038a72d47e42"
	I0419 18:59:17.609719   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.372330    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:17.609795   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.373779    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:17.609795   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.376042    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da1d06ec238f43c7ad43cae75e142a6d15b9c8fb69f88ad8079f167f3f3a6fd4"
	I0419 18:59:17.609795   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.392858    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7935893e9f22a54393d2b3d0a644f7c11a848d5604938074232342a8602e239f"
	I0419 18:59:17.609872   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423082    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-ca-certs\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:17.609872   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423312    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-flexvolume-dir\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:17.609960   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423400    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-k8s-certs\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:17.610043   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423427    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-kubeconfig\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:17.610043   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423456    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af7a3c9321ace7e2a933260472b90113-ca-certs\") pod \"kube-apiserver-multinode-348000\" (UID: \"af7a3c9321ace7e2a933260472b90113\") " pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:17.610109   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423489    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/c0cfa3da6a3913c3e67500f6c3e9d72b-etcd-data\") pod \"etcd-multinode-348000\" (UID: \"c0cfa3da6a3913c3e67500f6c3e9d72b\") " pod="kube-system/etcd-multinode-348000"
	I0419 18:59:17.610194   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423525    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30aa2729d0c65b9f89e1ae2d151edd9b-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-348000\" (UID: \"30aa2729d0c65b9f89e1ae2d151edd9b\") " pod="kube-system/kube-controller-manager-multinode-348000"
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423552    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92813b2aed63b63058d3fd06709fa24e-kubeconfig\") pod \"kube-scheduler-multinode-348000\" (UID: \"92813b2aed63b63058d3fd06709fa24e\") " pod="kube-system/kube-scheduler-multinode-348000"
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423669    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af7a3c9321ace7e2a933260472b90113-k8s-certs\") pod \"kube-apiserver-multinode-348000\" (UID: \"af7a3c9321ace7e2a933260472b90113\") " pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423703    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af7a3c9321ace7e2a933260472b90113-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-348000\" (UID: \"af7a3c9321ace7e2a933260472b90113\") " pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.423739    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/c0cfa3da6a3913c3e67500f6c3e9d72b-etcd-certs\") pod \"etcd-multinode-348000\" (UID: \"c0cfa3da6a3913c3e67500f6c3e9d72b\") " pod="kube-system/etcd-multinode-348000"
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.518144    1526 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.19.42.24:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-348000.17c7da5cb9bb1787  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-348000,UID:multinode-348000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-348000,},FirstTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,LastTimestamp:2024-04-20 01:57:55.025655687 +0000 UTC m=+0.185818354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-348000,}"
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.713067    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="800ms"
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: I0420 01:57:55.777032    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.778597    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: W0420 01:57:55.832721    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:55 multinode-348000 kubelet[1526]: E0420 01:57:55.832971    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-348000&limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: W0420 01:57:56.061439    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.063005    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: W0420 01:57:56.073517    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.610223   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.073647    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.610749   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: W0420 01:57:56.303763    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.610749   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.303918    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.42.24:8443: connect: connection refused
	I0419 18:59:17.610749   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.515345    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-348000?timeout=10s\": dial tcp 172.19.42.24:8443: connect: connection refused" interval="1.6s"
	I0419 18:59:17.610749   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: I0420 01:57:56.583532    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:17.610749   14960 command_runner.go:130] > Apr 20 01:57:56 multinode-348000 kubelet[1526]: E0420 01:57:56.584646    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.42.24:8443: connect: connection refused" node="multinode-348000"
	I0419 18:59:17.610908   14960 command_runner.go:130] > Apr 20 01:57:58 multinode-348000 kubelet[1526]: I0420 01:57:58.185924    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-348000"
	I0419 18:59:17.610908   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.850138    1526 kubelet_node_status.go:112] "Node was previously registered" node="multinode-348000"
	I0419 18:59:17.610908   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.850459    1526 kubelet_node_status.go:76] "Successfully registered node" node="multinode-348000"
	I0419 18:59:17.610908   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.852895    1526 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0419 18:59:17.610908   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.854574    1526 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0419 18:59:17.611069   14960 command_runner.go:130] > Apr 20 01:58:00 multinode-348000 kubelet[1526]: I0420 01:58:00.855598    1526 setters.go:580] "Node became not ready" node="multinode-348000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-04-20T01:58:00Z","lastTransitionTime":"2024-04-20T01:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.022496    1526 apiserver.go:52] "Watching apiserver"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.028549    1526 topology_manager.go:215] "Topology Admit Handler" podUID="274342c4-c21f-4279-b0ea-743d8e2c1463" podNamespace="kube-system" podName="kube-proxy-kj76x"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.028950    1526 topology_manager.go:215] "Topology Admit Handler" podUID="46c91d5e-edfa-4254-a802-148047caeab5" podNamespace="kube-system" podName="kindnet-s4fsr"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.029150    1526 topology_manager.go:215] "Topology Admit Handler" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7w477"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.029359    1526 topology_manager.go:215] "Topology Admit Handler" podUID="ffa0cfb9-91fb-4d5b-abe7-11992c731b74" podNamespace="kube-system" podName="storage-provisioner"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.029596    1526 topology_manager.go:215] "Topology Admit Handler" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916" podNamespace="default" podName="busybox-fc5497c4f-xnz2k"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.030004    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.030339    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-348000" podUID="af4afa87-c484-4b73-9a4d-e86ddcd90049"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.031127    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-348000" podUID="18f5e677-6a96-47ee-9f61-60ab9445eb92"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.036486    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.078433    1526 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-348000"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.080072    1526 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.080948    1526 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-348000"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.155980    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/274342c4-c21f-4279-b0ea-743d8e2c1463-xtables-lock\") pod \"kube-proxy-kj76x\" (UID: \"274342c4-c21f-4279-b0ea-743d8e2c1463\") " pod="kube-system/kube-proxy-kj76x"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.156217    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/274342c4-c21f-4279-b0ea-743d8e2c1463-lib-modules\") pod \"kube-proxy-kj76x\" (UID: \"274342c4-c21f-4279-b0ea-743d8e2c1463\") " pod="kube-system/kube-proxy-kj76x"
	I0419 18:59:17.611117   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157104    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/46c91d5e-edfa-4254-a802-148047caeab5-cni-cfg\") pod \"kindnet-s4fsr\" (UID: \"46c91d5e-edfa-4254-a802-148047caeab5\") " pod="kube-system/kindnet-s4fsr"
	I0419 18:59:17.611703   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157248    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46c91d5e-edfa-4254-a802-148047caeab5-xtables-lock\") pod \"kindnet-s4fsr\" (UID: \"46c91d5e-edfa-4254-a802-148047caeab5\") " pod="kube-system/kindnet-s4fsr"
	I0419 18:59:17.611873   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.157178    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:17.611873   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.157539    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:01.657504317 +0000 UTC m=+6.817666984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:17.611966   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157392    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ffa0cfb9-91fb-4d5b-abe7-11992c731b74-tmp\") pod \"storage-provisioner\" (UID: \"ffa0cfb9-91fb-4d5b-abe7-11992c731b74\") " pod="kube-system/storage-provisioner"
	I0419 18:59:17.611966   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.157844    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46c91d5e-edfa-4254-a802-148047caeab5-lib-modules\") pod \"kindnet-s4fsr\" (UID: \"46c91d5e-edfa-4254-a802-148047caeab5\") " pod="kube-system/kindnet-s4fsr"
	I0419 18:59:17.612051   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.176143    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89aa15d5f8e328791151d96100a36918" path="/var/lib/kubelet/pods/89aa15d5f8e328791151d96100a36918/volumes"
	I0419 18:59:17.612079   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.179130    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fef0b92f87f018a58c19217fdf5d6e1" path="/var/lib/kubelet/pods/8fef0b92f87f018a58c19217fdf5d6e1/volumes"
	I0419 18:59:17.612115   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.206903    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612150   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.207139    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.207264    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:01.707244177 +0000 UTC m=+6.867406744 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.241569    1526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-348000" podStartSLOduration=0.241545984 podStartE2EDuration="241.545984ms" podCreationTimestamp="2024-04-20 01:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-20 01:58:01.218870918 +0000 UTC m=+6.379033485" watchObservedRunningTime="2024-04-20 01:58:01.241545984 +0000 UTC m=+6.401708551"
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: I0420 01:58:01.287607    1526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-348000" podStartSLOduration=0.287584435 podStartE2EDuration="287.584435ms" podCreationTimestamp="2024-04-20 01:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-20 01:58:01.265671392 +0000 UTC m=+6.425834059" watchObservedRunningTime="2024-04-20 01:58:01.287584435 +0000 UTC m=+6.447747102"
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.663973    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.664077    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:02.664058382 +0000 UTC m=+7.824220949 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.764474    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.764518    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:01 multinode-348000 kubelet[1526]: E0420 01:58:01.764584    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:02.764566131 +0000 UTC m=+7.924728698 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: I0420 01:58:02.563904    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5a777eba295e3b640d8d8a60aedcc20243d0f4a6fc4d3f3391b06fc6de0247a"
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.564077    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: I0420 01:58:02.565075    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-348000" podUID="af4afa87-c484-4b73-9a4d-e86ddcd90049"
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.679358    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.679588    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:04.67956768 +0000 UTC m=+9.839730247 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.789713    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.791860    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612182   14960 command_runner.go:130] > Apr 20 01:58:02 multinode-348000 kubelet[1526]: E0420 01:58:02.792206    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:04.792183185 +0000 UTC m=+9.952345752 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612773   14960 command_runner.go:130] > Apr 20 01:58:03 multinode-348000 kubelet[1526]: E0420 01:58:03.170851    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.612818   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.169519    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.612818   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.700421    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.700676    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:08.700644486 +0000 UTC m=+13.860807053 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.801637    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.801751    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:04 multinode-348000 kubelet[1526]: E0420 01:58:04.801874    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:08.801835856 +0000 UTC m=+13.961998423 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:05 multinode-348000 kubelet[1526]: E0420 01:58:05.169947    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:06 multinode-348000 kubelet[1526]: E0420 01:58:06.169499    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:07 multinode-348000 kubelet[1526]: E0420 01:58:07.170147    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.169208    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.751778    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.752347    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:16.752328447 +0000 UTC m=+21.912491114 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.852291    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.852347    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.612894   14960 command_runner.go:130] > Apr 20 01:58:08 multinode-348000 kubelet[1526]: E0420 01:58:08.852455    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:16.852435774 +0000 UTC m=+22.012598341 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.613486   14960 command_runner.go:130] > Apr 20 01:58:09 multinode-348000 kubelet[1526]: E0420 01:58:09.169017    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.613536   14960 command_runner.go:130] > Apr 20 01:58:10 multinode-348000 kubelet[1526]: E0420 01:58:10.169399    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.613536   14960 command_runner.go:130] > Apr 20 01:58:11 multinode-348000 kubelet[1526]: E0420 01:58:11.169467    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.613638   14960 command_runner.go:130] > Apr 20 01:58:12 multinode-348000 kubelet[1526]: E0420 01:58:12.169441    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.613659   14960 command_runner.go:130] > Apr 20 01:58:13 multinode-348000 kubelet[1526]: E0420 01:58:13.169983    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.613741   14960 command_runner.go:130] > Apr 20 01:58:14 multinode-348000 kubelet[1526]: E0420 01:58:14.169635    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.613788   14960 command_runner.go:130] > Apr 20 01:58:15 multinode-348000 kubelet[1526]: E0420 01:58:15.169488    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.613820   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.169756    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.613886   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.835157    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.835299    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:58:32.835279204 +0000 UTC m=+37.995441771 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.936116    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.936169    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:16 multinode-348000 kubelet[1526]: E0420 01:58:16.936232    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:58:32.936212581 +0000 UTC m=+38.096375148 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:17 multinode-348000 kubelet[1526]: E0420 01:58:17.169160    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:18 multinode-348000 kubelet[1526]: E0420 01:58:18.171760    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:19 multinode-348000 kubelet[1526]: E0420 01:58:19.169723    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:20 multinode-348000 kubelet[1526]: E0420 01:58:20.169542    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:21 multinode-348000 kubelet[1526]: E0420 01:58:21.169675    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:22 multinode-348000 kubelet[1526]: E0420 01:58:22.169364    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.613914   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: E0420 01:58:23.169569    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.614447   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: I0420 01:58:23.960680    1526 scope.go:117] "RemoveContainer" containerID="8a37c65d06fabf8d836ffb9a511bb6df5b549fa37051ef79f1f839076af60512"
	I0419 18:59:17.614447   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: I0420 01:58:23.961154    1526 scope.go:117] "RemoveContainer" containerID="f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919"
	I0419 18:59:17.614506   14960 command_runner.go:130] > Apr 20 01:58:23 multinode-348000 kubelet[1526]: E0420 01:58:23.961603    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kindnet-cni pod=kindnet-s4fsr_kube-system(46c91d5e-edfa-4254-a802-148047caeab5)\"" pod="kube-system/kindnet-s4fsr" podUID="46c91d5e-edfa-4254-a802-148047caeab5"
	I0419 18:59:17.614506   14960 command_runner.go:130] > Apr 20 01:58:24 multinode-348000 kubelet[1526]: E0420 01:58:24.169608    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.614606   14960 command_runner.go:130] > Apr 20 01:58:25 multinode-348000 kubelet[1526]: E0420 01:58:25.169976    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.614606   14960 command_runner.go:130] > Apr 20 01:58:26 multinode-348000 kubelet[1526]: E0420 01:58:26.169734    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.614667   14960 command_runner.go:130] > Apr 20 01:58:27 multinode-348000 kubelet[1526]: E0420 01:58:27.170054    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.614667   14960 command_runner.go:130] > Apr 20 01:58:28 multinode-348000 kubelet[1526]: E0420 01:58:28.169260    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.614667   14960 command_runner.go:130] > Apr 20 01:58:29 multinode-348000 kubelet[1526]: E0420 01:58:29.169306    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.614667   14960 command_runner.go:130] > Apr 20 01:58:30 multinode-348000 kubelet[1526]: E0420 01:58:30.169857    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.614667   14960 command_runner.go:130] > Apr 20 01:58:31 multinode-348000 kubelet[1526]: E0420 01:58:31.169543    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.614667   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.169556    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.614667   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.891318    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0419 18:59:17.614667   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.891496    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume podName:895ddde9-466d-4abf-b6f4-594847b26c6c nodeName:}" failed. No retries permitted until 2024-04-20 01:59:04.891477649 +0000 UTC m=+70.051640216 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/895ddde9-466d-4abf-b6f4-594847b26c6c-config-volume") pod "coredns-7db6d8ff4d-7w477" (UID: "895ddde9-466d-4abf-b6f4-594847b26c6c") : object "kube-system"/"coredns" not registered
	I0419 18:59:17.615273   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.992269    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.615273   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.992577    1526 projected.go:200] Error preparing data for projected volume kube-api-access-d86jr for pod default/busybox-fc5497c4f-xnz2k: object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.615561   14960 command_runner.go:130] > Apr 20 01:58:32 multinode-348000 kubelet[1526]: E0420 01:58:32.992723    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr podName:7aa2ff69-7aaf-48d7-905e-15ad43a94916 nodeName:}" failed. No retries permitted until 2024-04-20 01:59:04.992688767 +0000 UTC m=+70.152851434 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-d86jr" (UniqueName: "kubernetes.io/projected/7aa2ff69-7aaf-48d7-905e-15ad43a94916-kube-api-access-d86jr") pod "busybox-fc5497c4f-xnz2k" (UID: "7aa2ff69-7aaf-48d7-905e-15ad43a94916") : object "default"/"kube-root-ca.crt" not registered
	I0419 18:59:17.615631   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: I0420 01:58:33.115355    1526 scope.go:117] "RemoveContainer" containerID="e248c230a4aa379bf469f41a95d1ea2033316d322a10b6da0ae06f656334b936"
	I0419 18:59:17.615653   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: I0420 01:58:33.115897    1526 scope.go:117] "RemoveContainer" containerID="45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702"
	I0419 18:59:17.615695   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: E0420 01:58:33.116183    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ffa0cfb9-91fb-4d5b-abe7-11992c731b74)\"" pod="kube-system/storage-provisioner" podUID="ffa0cfb9-91fb-4d5b-abe7-11992c731b74"
	I0419 18:59:17.615734   14960 command_runner.go:130] > Apr 20 01:58:33 multinode-348000 kubelet[1526]: E0420 01:58:33.169303    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.615781   14960 command_runner.go:130] > Apr 20 01:58:34 multinode-348000 kubelet[1526]: E0420 01:58:34.169175    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:35 multinode-348000 kubelet[1526]: E0420 01:58:35.169508    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 kubelet[1526]: E0420 01:58:36.169960    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:36 multinode-348000 kubelet[1526]: I0420 01:58:36.170769    1526 scope.go:117] "RemoveContainer" containerID="f8c798c9940780f4e5b477f820c4feed7a3fa7ff6e679dc3a5dc398b7e2d6919"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:37 multinode-348000 kubelet[1526]: E0420 01:58:37.171433    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:38 multinode-348000 kubelet[1526]: E0420 01:58:38.169747    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:39 multinode-348000 kubelet[1526]: E0420 01:58:39.169252    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7w477" podUID="895ddde9-466d-4abf-b6f4-594847b26c6c"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:40 multinode-348000 kubelet[1526]: E0420 01:58:40.169368    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xnz2k" podUID="7aa2ff69-7aaf-48d7-905e-15ad43a94916"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:40 multinode-348000 kubelet[1526]: I0420 01:58:40.269590    1526 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:45 multinode-348000 kubelet[1526]: I0420 01:58:45.169759    1526 scope.go:117] "RemoveContainer" containerID="45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]: I0420 01:58:55.162183    1526 scope.go:117] "RemoveContainer" containerID="490377504e57c3189163833390967e79bb80d222691d4402677feb6f25ed22f4"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]: I0420 01:58:55.206283    1526 scope.go:117] "RemoveContainer" containerID="53f6a00490766be2eb687e6fff052ca7a46ae16a0baf4551e956c81550d673b2"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]: E0420 01:58:55.212558    1526 iptables.go:577] "Could not set up iptables canary" err=<
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:58:55 multinode-348000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 kubelet[1526]: I0420 01:59:05.918992    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75ff9f4e9dde29a997e4321dd3659a2ce7d479a75826a78c4d3525f1eb5f696f"
	I0419 18:59:17.615821   14960 command_runner.go:130] > Apr 20 01:59:05 multinode-348000 kubelet[1526]: I0420 01:59:05.948376    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f28a1e746a9b438367a8e05d2e1a085afb4abec4174f7a7eb80549e02b95047a"
	I0419 18:59:17.662136   14960 logs.go:123] Gathering logs for etcd [2deabe4dbdf4] ...
	I0419 18:59:17.663139   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2deabe4dbdf4"
	I0419 18:59:17.696737   14960 command_runner.go:130] ! {"level":"warn","ts":"2024-04-20T01:57:57.046906Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0419 18:59:17.696981   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.051203Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.19.42.24:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.19.42.24:2380","--initial-cluster=multinode-348000=https://172.19.42.24:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.19.42.24:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.19.42.24:2380","--name=multinode-348000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0419 18:59:17.696981   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.05132Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0419 18:59:17.696981   14960 command_runner.go:130] ! {"level":"warn","ts":"2024-04-20T01:57:57.053068Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0419 18:59:17.696981   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.053085Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.19.42.24:2380"]}
	I0419 18:59:17.696981   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.053402Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0419 18:59:17.697091   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.06821Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"]}
	I0419 18:59:17.697141   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.071769Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-348000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.19.42.24:2380"],"listen-peer-urls":["https://172.19.42.24:2380"],"advertise-client-urls":["https://172.19.42.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0419 18:59:17.697240   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.117145Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"37.959314ms"}
	I0419 18:59:17.697240   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.163657Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0419 18:59:17.697240   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186114Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","commit-index":1996}
	I0419 18:59:17.697240   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c switched to configuration voters=()"}
	I0419 18:59:17.697337   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became follower at term 2"}
	I0419 18:59:17.697337   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.186867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 4fba18389b33806c [peers: [], term: 2, commit: 1996, applied: 0, lastindex: 1996, lastterm: 2]"}
	I0419 18:59:17.697337   14960 command_runner.go:130] ! {"level":"warn","ts":"2024-04-20T01:57:57.204366Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0419 18:59:17.697395   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.210889Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1364}
	I0419 18:59:17.697395   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.22333Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1726}
	I0419 18:59:17.697395   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.233905Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0419 18:59:17.697464   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.247902Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"4fba18389b33806c","timeout":"7s"}
	I0419 18:59:17.697482   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.252957Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"4fba18389b33806c"}
	I0419 18:59:17.697507   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.253239Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"4fba18389b33806c","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0419 18:59:17.697507   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.257675Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0419 18:59:17.697580   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.259962Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0419 18:59:17.697580   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.260237Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0419 18:59:17.697636   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.26046Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0419 18:59:17.697674   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c switched to configuration voters=(5744930906065567852)"}
	I0419 18:59:17.697732   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264281Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","added-peer-id":"4fba18389b33806c","added-peer-peer-urls":["https://172.19.42.231:2380"]}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264439Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","cluster-version":"3.5"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.264612Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.271976Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.273753Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4fba18389b33806c","initial-advertise-peer-urls":["https://172.19.42.24:2380"],"listen-peer-urls":["https://172.19.42.24:2380"],"advertise-client-urls":["https://172.19.42.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.27526Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.27622Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.42.24:2380"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:57.277207Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.42.24:2380"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c is starting a new election at term 2"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became pre-candidate at term 2"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c received MsgPreVoteResp from 4fba18389b33806c at term 2"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became candidate at term 3"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c received MsgVoteResp from 4fba18389b33806c at term 3"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became leader at term 3"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.988399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4fba18389b33806c elected leader 4fba18389b33806c at term 3"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.994477Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4fba18389b33806c","local-member-attributes":"{Name:multinode-348000 ClientURLs:[https://172.19.42.24:2379]}","request-path":"/0/members/4fba18389b33806c/attributes","cluster-id":"dca2ede42d67bc1c","publish-timeout":"7s"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.994493Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.994512Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.996572Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.996617Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.999043Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.42.24:2379"}
	I0419 18:59:17.697773   14960 command_runner.go:130] ! {"level":"info","ts":"2024-04-20T01:57:58.999341Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0419 18:59:17.706971   14960 logs.go:123] Gathering logs for coredns [352cf21a3e20] ...
	I0419 18:59:17.706971   14960 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 352cf21a3e20"
	I0419 18:59:17.736117   14960 command_runner.go:130] > .:53
	I0419 18:59:17.736117   14960 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93714cfd58e203ac2baa48ea9c7b435951d2a9faed7a5c70b4e84c89c6c1fe4c1dfa41f14b3ebf0f5941dade673a82eaad960061e673dd78dcb856db3393b39d
	I0419 18:59:17.736117   14960 command_runner.go:130] > CoreDNS-1.11.1
	I0419 18:59:17.736117   14960 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0419 18:59:17.736117   14960 command_runner.go:130] > [INFO] 127.0.0.1:51206 - 14298 "HINFO IN 4972057462503628469.2167329557243878603. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028297062s
	I0419 18:59:20.236754   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods
	I0419 18:59:20.236754   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:20.236754   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:20.236754   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:20.244481   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 18:59:20.244481   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:20.244481   14960 round_trippers.go:580]     Audit-Id: cfc1e882-a2ad-48e3-81e8-5eb5b902c307
	I0419 18:59:20.244481   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:20.244481   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:20.244481   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:20.244481   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:20.244481   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:20 GMT
	I0419 18:59:20.246285   14960 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1957"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1944","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86494 chars]
	I0419 18:59:20.250417   14960 system_pods.go:59] 12 kube-system pods found
	I0419 18:59:20.250417   14960 system_pods.go:61] "coredns-7db6d8ff4d-7w477" [895ddde9-466d-4abf-b6f4-594847b26c6c] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "etcd-multinode-348000" [33702588-cdf3-4577-b18d-18415cca2c25] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "kindnet-mg8qs" [c6e448a2-6f0c-4c7f-aa8b-0d585c84b09e] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "kindnet-s4fsr" [46c91d5e-edfa-4254-a802-148047caeab5] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "kindnet-s98rh" [551f5bde-7c56-4023-ad92-a2d7a122da60] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "kube-apiserver-multinode-348000" [13adbf1b-6c17-47a9-951d-2481680a47bd] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "kube-controller-manager-multinode-348000" [299bb088-9795-4452-87a8-5e96bcacedde] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "kube-proxy-2jjsq" [f9666ab7-0d1f-4800-b979-6e38fecdc518] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "kube-proxy-bjv9b" [3e909d14-543a-4734-8c17-7e2b8188553d] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "kube-proxy-kj76x" [274342c4-c21f-4279-b0ea-743d8e2c1463] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "kube-scheduler-multinode-348000" [000cfafe-a513-4738-9de2-3c25244b72be] Running
	I0419 18:59:20.250417   14960 system_pods.go:61] "storage-provisioner" [ffa0cfb9-91fb-4d5b-abe7-11992c731b74] Running
	I0419 18:59:20.250972   14960 system_pods.go:74] duration metric: took 3.824064s to wait for pod list to return data ...
	I0419 18:59:20.251074   14960 default_sa.go:34] waiting for default service account to be created ...
	I0419 18:59:20.251074   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/default/serviceaccounts
	I0419 18:59:20.251074   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:20.251074   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:20.251074   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:20.255435   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:20.255840   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:20.255840   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:20.255840   14960 round_trippers.go:580]     Content-Length: 262
	I0419 18:59:20.255840   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:20 GMT
	I0419 18:59:20.255840   14960 round_trippers.go:580]     Audit-Id: 7b5244ac-421e-4c65-90cc-38ccffaafc57
	I0419 18:59:20.255840   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:20.255840   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:20.255840   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:20.255840   14960 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1957"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"fd56f1e7-7816-4124-aeed-e48a3ea6b7a7","resourceVersion":"301","creationTimestamp":"2024-04-20T01:35:22Z"}}]}
	I0419 18:59:20.255840   14960 default_sa.go:45] found service account: "default"
	I0419 18:59:20.255840   14960 default_sa.go:55] duration metric: took 4.7668ms for default service account to be created ...
	I0419 18:59:20.255840   14960 system_pods.go:116] waiting for k8s-apps to be running ...
	I0419 18:59:20.255840   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods
	I0419 18:59:20.256425   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:20.256425   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:20.256425   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:20.261095   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 18:59:20.261095   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:20.261095   14960 round_trippers.go:580]     Audit-Id: d36db68a-0854-4d15-92ee-0523cdca6651
	I0419 18:59:20.261095   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:20.261624   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:20.261624   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:20.261624   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:20.261624   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:20 GMT
	I0419 18:59:20.263006   14960 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1957"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1944","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86494 chars]
	I0419 18:59:20.267133   14960 system_pods.go:86] 12 kube-system pods found
	I0419 18:59:20.267201   14960 system_pods.go:89] "coredns-7db6d8ff4d-7w477" [895ddde9-466d-4abf-b6f4-594847b26c6c] Running
	I0419 18:59:20.267201   14960 system_pods.go:89] "etcd-multinode-348000" [33702588-cdf3-4577-b18d-18415cca2c25] Running
	I0419 18:59:20.267201   14960 system_pods.go:89] "kindnet-mg8qs" [c6e448a2-6f0c-4c7f-aa8b-0d585c84b09e] Running
	I0419 18:59:20.267201   14960 system_pods.go:89] "kindnet-s4fsr" [46c91d5e-edfa-4254-a802-148047caeab5] Running
	I0419 18:59:20.267249   14960 system_pods.go:89] "kindnet-s98rh" [551f5bde-7c56-4023-ad92-a2d7a122da60] Running
	I0419 18:59:20.267249   14960 system_pods.go:89] "kube-apiserver-multinode-348000" [13adbf1b-6c17-47a9-951d-2481680a47bd] Running
	I0419 18:59:20.267249   14960 system_pods.go:89] "kube-controller-manager-multinode-348000" [299bb088-9795-4452-87a8-5e96bcacedde] Running
	I0419 18:59:20.267249   14960 system_pods.go:89] "kube-proxy-2jjsq" [f9666ab7-0d1f-4800-b979-6e38fecdc518] Running
	I0419 18:59:20.267308   14960 system_pods.go:89] "kube-proxy-bjv9b" [3e909d14-543a-4734-8c17-7e2b8188553d] Running
	I0419 18:59:20.267308   14960 system_pods.go:89] "kube-proxy-kj76x" [274342c4-c21f-4279-b0ea-743d8e2c1463] Running
	I0419 18:59:20.267308   14960 system_pods.go:89] "kube-scheduler-multinode-348000" [000cfafe-a513-4738-9de2-3c25244b72be] Running
	I0419 18:59:20.267308   14960 system_pods.go:89] "storage-provisioner" [ffa0cfb9-91fb-4d5b-abe7-11992c731b74] Running
	I0419 18:59:20.267308   14960 system_pods.go:126] duration metric: took 11.4671ms to wait for k8s-apps to be running ...
	I0419 18:59:20.267390   14960 system_svc.go:44] waiting for kubelet service to be running ....
	I0419 18:59:20.280549   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 18:59:20.308081   14960 system_svc.go:56] duration metric: took 40.5956ms WaitForService to wait for kubelet
	I0419 18:59:20.308143   14960 kubeadm.go:576] duration metric: took 1m14.7232798s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 18:59:20.308200   14960 node_conditions.go:102] verifying NodePressure condition ...
	I0419 18:59:20.308262   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes
	I0419 18:59:20.308262   14960 round_trippers.go:469] Request Headers:
	I0419 18:59:20.308262   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 18:59:20.308262   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 18:59:20.313673   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 18:59:20.313673   14960 round_trippers.go:577] Response Headers:
	I0419 18:59:20.313749   14960 round_trippers.go:580]     Audit-Id: c51e48ad-320b-427f-b68d-48c98d19d4b5
	I0419 18:59:20.313749   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 18:59:20.313749   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 18:59:20.313749   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 18:59:20.313749   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 18:59:20.313749   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 01:59:20 GMT
	I0419 18:59:20.314301   14960 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1957"},"items":[{"metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16255 chars]
	I0419 18:59:20.315722   14960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 18:59:20.315849   14960 node_conditions.go:123] node cpu capacity is 2
	I0419 18:59:20.315902   14960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 18:59:20.315902   14960 node_conditions.go:123] node cpu capacity is 2
	I0419 18:59:20.315902   14960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 18:59:20.315902   14960 node_conditions.go:123] node cpu capacity is 2
	I0419 18:59:20.315902   14960 node_conditions.go:105] duration metric: took 7.7018ms to run NodePressure ...
	I0419 18:59:20.315977   14960 start.go:240] waiting for startup goroutines ...
	I0419 18:59:20.315977   14960 start.go:245] waiting for cluster config update ...
	I0419 18:59:20.316020   14960 start.go:254] writing updated cluster config ...
	I0419 18:59:20.321504   14960 out.go:177] 
	I0419 18:59:20.324144   14960 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:59:20.334295   14960 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:59:20.334527   14960 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 18:59:20.340312   14960 out.go:177] * Starting "multinode-348000-m02" worker node in "multinode-348000" cluster
	I0419 18:59:20.343001   14960 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 18:59:20.343001   14960 cache.go:56] Caching tarball of preloaded images
	I0419 18:59:20.343799   14960 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0419 18:59:20.343799   14960 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 18:59:20.344338   14960 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 18:59:20.346950   14960 start.go:360] acquireMachinesLock for multinode-348000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 18:59:20.347102   14960 start.go:364] duration metric: took 76µs to acquireMachinesLock for "multinode-348000-m02"
	I0419 18:59:20.347328   14960 start.go:96] Skipping create...Using existing machine configuration
	I0419 18:59:20.347328   14960 fix.go:54] fixHost starting: m02
	I0419 18:59:20.347486   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:59:22.482592   14960 main.go:141] libmachine: [stdout =====>] : Off
	
	I0419 18:59:22.482592   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:22.482592   14960 fix.go:112] recreateIfNeeded on multinode-348000-m02: state=Stopped err=<nil>
	W0419 18:59:22.482592   14960 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 18:59:22.485353   14960 out.go:177] * Restarting existing hyperv VM for "multinode-348000-m02" ...
	I0419 18:59:22.488699   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-348000-m02
	I0419 18:59:25.551046   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:59:25.551046   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:25.551118   14960 main.go:141] libmachine: Waiting for host to start...
	I0419 18:59:25.551118   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:59:27.746071   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:59:27.746071   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:27.746319   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:59:30.267148   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:59:30.267323   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:31.281397   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:59:33.448302   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:59:33.448302   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:33.448302   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:59:35.954324   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:59:35.954718   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:36.969477   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:59:39.101528   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:59:39.101528   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:39.101528   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:59:41.601589   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:59:41.601589   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:42.602907   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:59:44.806448   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:59:44.806928   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:44.807070   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:59:47.357106   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 18:59:47.358115   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:48.359673   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:59:50.574810   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:59:50.574810   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:50.575478   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:59:53.157141   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 18:59:53.157141   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:53.157141   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:59:55.315053   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:59:55.315053   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:55.316120   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:59:57.899958   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 18:59:57.900459   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:59:57.900824   14960 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 18:59:57.903342   14960 machine.go:94] provisionDockerMachine start ...
	I0419 18:59:57.903418   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:00.053036   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:00.054023   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:00.054099   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:02.665325   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:02.665325   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:02.671525   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 19:00:02.672246   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.34 22 <nil> <nil>}
	I0419 19:00:02.672246   14960 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 19:00:02.812690   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0419 19:00:02.813294   14960 buildroot.go:166] provisioning hostname "multinode-348000-m02"
	I0419 19:00:02.813294   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:04.968843   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:04.968843   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:04.969325   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:07.568901   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:07.568901   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:07.577137   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 19:00:07.577926   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.34 22 <nil> <nil>}
	I0419 19:00:07.577926   14960 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-348000-m02 && echo "multinode-348000-m02" | sudo tee /etc/hostname
	I0419 19:00:07.742489   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-348000-m02
	
	I0419 19:00:07.742618   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:09.863375   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:09.863375   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:09.863375   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:12.478404   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:12.478404   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:12.485486   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 19:00:12.485645   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.34 22 <nil> <nil>}
	I0419 19:00:12.485645   14960 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-348000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-348000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-348000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 19:00:12.646037   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 19:00:12.646037   14960 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0419 19:00:12.646037   14960 buildroot.go:174] setting up certificates
	I0419 19:00:12.646037   14960 provision.go:84] configureAuth start
	I0419 19:00:12.646037   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:14.793172   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:14.793172   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:14.794080   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:17.365754   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:17.365985   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:17.365985   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:19.463864   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:19.463864   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:19.463864   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:22.073382   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:22.073475   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:22.073475   14960 provision.go:143] copyHostCerts
	I0419 19:00:22.073756   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0419 19:00:22.074106   14960 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0419 19:00:22.074106   14960 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0419 19:00:22.074589   14960 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0419 19:00:22.075933   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0419 19:00:22.076189   14960 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0419 19:00:22.076318   14960 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0419 19:00:22.076741   14960 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0419 19:00:22.077797   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0419 19:00:22.078190   14960 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0419 19:00:22.078190   14960 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0419 19:00:22.078569   14960 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0419 19:00:22.079605   14960 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-348000-m02 san=[127.0.0.1 172.19.47.34 localhost minikube multinode-348000-m02]
	I0419 19:00:22.251286   14960 provision.go:177] copyRemoteCerts
	I0419 19:00:22.267070   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 19:00:22.267070   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:24.361051   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:24.361051   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:24.361575   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:26.924432   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:26.924683   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:26.924813   14960 sshutil.go:53] new ssh client: &{IP:172.19.47.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\id_rsa Username:docker}
	I0419 19:00:27.029393   14960 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7622522s)
	I0419 19:00:27.029451   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0419 19:00:27.030087   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0419 19:00:27.080733   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0419 19:00:27.080931   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0419 19:00:27.128736   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0419 19:00:27.129594   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 19:00:27.182369   14960 provision.go:87] duration metric: took 14.5362365s to configureAuth
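The configureAuth step above generates a CA-signed server certificate whose SANs cover the node IP and hostnames, then scp's the CA, cert, and key into /etc/docker. The following is an illustrative sketch of that kind of cert generation with plain openssl (not minikube's actual provision.go code; all file names here are throwaway stand-ins for the `.minikube\certs` files):

```shell
set -e
workdir=$(mktemp -d); cd "$workdir"

# CA key pair (minikube reuses an existing ca.pem/ca-key.pem; recreated here
# only so the sketch is self-contained)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
  -subj "/O=minikubeCA" -days 1

# Server key + CSR for the machine, org matching the log's provision line
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
  -subj "/O=jenkins.multinode-348000-m02"

# Sign with the SAN list seen in the log (127.0.0.1, node IP, hostnames)
printf 'subjectAltName=IP:127.0.0.1,IP:172.19.47.34,DNS:localhost,DNS:minikube,DNS:multinode-348000-m02\n' > san.cnf
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -extfile san.cnf -days 1 -out server.pem

openssl verify -CAfile ca.pem server.pem   # prints: server.pem: OK
```

The resulting server.pem/server-key.pem pair is what dockerd's `--tlscert`/`--tlskey` flags (visible in the ExecStart line further down) point at.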
	I0419 19:00:27.182514   14960 buildroot.go:189] setting minikube options for container-runtime
	I0419 19:00:27.183524   14960 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 19:00:27.183693   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:29.286933   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:29.287758   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:29.287897   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:31.811437   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:31.811437   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:31.820895   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 19:00:31.821699   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.34 22 <nil> <nil>}
	I0419 19:00:31.821699   14960 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0419 19:00:31.968296   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0419 19:00:31.968296   14960 buildroot.go:70] root file system type: tmpfs
	I0419 19:00:31.968830   14960 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0419 19:00:31.968830   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:34.075654   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:34.075654   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:34.075957   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:36.589896   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:36.590132   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:36.596357   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 19:00:36.596357   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.34 22 <nil> <nil>}
	I0419 19:00:36.596357   14960 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.42.24"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0419 19:00:36.761782   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.42.24
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0419 19:00:36.761928   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:38.810210   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:38.811103   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:38.811218   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:41.347653   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:41.348654   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:41.354513   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 19:00:41.354513   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.34 22 <nil> <nil>}
	I0419 19:00:41.355041   14960 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0419 19:00:43.742202   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0419 19:00:43.742202   14960 machine.go:97] duration metric: took 45.8387639s to provisionDockerMachine
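The unit-file update just completed uses an idempotent swap pattern: write the new unit to `docker.service.new`, diff it against the installed one, and only move it into place and restart the daemon when they differ (or, as here, when no unit exists yet, which is why diff reports "can't stat"). A minimal sketch of the same pattern against throwaway files, with the privileged systemctl calls left as comments:

```shell
set -e
dir=$(mktemp -d)                 # stand-in for /lib/systemd/system
printf '%s\n' 'ExecStart=/usr/bin/dockerd --example-flags' > "$dir/docker.service.new"

# diff exits 0 only when the files are identical; missing or changed unit
# (exit 1 or 2) triggers the swap, matching the "diff ... || { ... }" in the log
if ! diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null; then
  mv "$dir/docker.service.new" "$dir/docker.service"
  # sudo systemctl -f daemon-reload && sudo systemctl -f enable docker \
  #   && sudo systemctl -f restart docker
  echo "unit updated"
fi
```

Re-running the block against an unchanged unit file would leave it untouched and skip the restart, which is the point of the diff guard.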
	I0419 19:00:43.742202   14960 start.go:293] postStartSetup for "multinode-348000-m02" (driver="hyperv")
	I0419 19:00:43.742202   14960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 19:00:43.756195   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 19:00:43.756195   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:45.829676   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:45.830233   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:45.830330   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:48.407654   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:48.407978   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:48.408181   14960 sshutil.go:53] new ssh client: &{IP:172.19.47.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\id_rsa Username:docker}
	I0419 19:00:48.513231   14960 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7570266s)
	I0419 19:00:48.529082   14960 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 19:00:48.537839   14960 command_runner.go:130] > NAME=Buildroot
	I0419 19:00:48.537839   14960 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0419 19:00:48.537839   14960 command_runner.go:130] > ID=buildroot
	I0419 19:00:48.537839   14960 command_runner.go:130] > VERSION_ID=2023.02.9
	I0419 19:00:48.537839   14960 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0419 19:00:48.537839   14960 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 19:00:48.537839   14960 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0419 19:00:48.538375   14960 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0419 19:00:48.539495   14960 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> 34162.pem in /etc/ssl/certs
	I0419 19:00:48.539495   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /etc/ssl/certs/34162.pem
	I0419 19:00:48.553246   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 19:00:48.578189   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /etc/ssl/certs/34162.pem (1708 bytes)
	I0419 19:00:48.627075   14960 start.go:296] duration metric: took 4.8848625s for postStartSetup
	I0419 19:00:48.627075   14960 fix.go:56] duration metric: took 1m28.2795619s for fixHost
	I0419 19:00:48.627075   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:50.805935   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:50.806884   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:50.806884   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:53.447848   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:53.448572   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:53.454794   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 19:00:53.455480   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.34 22 <nil> <nil>}
	I0419 19:00:53.455480   14960 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 19:00:53.597030   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713578453.581321408
	
	I0419 19:00:53.597133   14960 fix.go:216] guest clock: 1713578453.581321408
	I0419 19:00:53.597133   14960 fix.go:229] Guest: 2024-04-19 19:00:53.581321408 -0700 PDT Remote: 2024-04-19 19:00:48.6270755 -0700 PDT m=+296.820333301 (delta=4.954245908s)
	I0419 19:00:53.597263   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:00:55.693712   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:00:55.694736   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:55.694796   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:00:58.238910   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:00:58.238910   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:00:58.245560   14960 main.go:141] libmachine: Using SSH client type: native
	I0419 19:00:58.245884   14960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x78a1c0] 0x78cda0 <nil>  [] 0s} 172.19.47.34 22 <nil> <nil>}
	I0419 19:00:58.245884   14960 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713578453
	I0419 19:00:58.390249   14960 main.go:141] libmachine: SSH cmd err, output: <nil>: Sat Apr 20 02:00:53 UTC 2024
	
	I0419 19:00:58.390302   14960 fix.go:236] clock set: Sat Apr 20 02:00:53 UTC 2024
	 (err=<nil>)
	I0419 19:00:58.390302   14960 start.go:83] releasing machines lock for "multinode-348000-m02", held for 1m38.0428837s
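The clock fix above reads the guest clock, computes the ~4.95s delta against the host, and resets the guest with `sudo date -s @<epoch>`. The epoch in the log decodes to exactly the UTC timestamp the guest then reports (GNU `date` shown; BSD date uses `-r` instead of `-d @`):

```shell
# 1713578453 is the epoch passed to "sudo date -s" in the log
date -u -d @1713578453 '+%a %b %d %H:%M:%S UTC %Y'   # Sat Apr 20 02:00:53 UTC 2024
```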
	I0419 19:00:58.390545   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:01:00.450003   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:01:00.450003   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:00.450117   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:01:03.040422   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:01:03.040768   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:03.044247   14960 out.go:177] * Found network options:
	I0419 19:01:03.046833   14960 out.go:177]   - NO_PROXY=172.19.42.24
	W0419 19:01:03.048991   14960 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 19:01:03.051262   14960 out.go:177]   - NO_PROXY=172.19.42.24
	W0419 19:01:03.053333   14960 proxy.go:119] fail to check proxy env: Error ip not in block
	W0419 19:01:03.054258   14960 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 19:01:03.057094   14960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 19:01:03.057094   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:01:03.067565   14960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0419 19:01:03.068567   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:01:05.208204   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:01:05.208701   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:05.208871   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:01:05.220683   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:01:05.220683   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:05.220683   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:01:07.832195   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:01:07.832195   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:07.832953   14960 sshutil.go:53] new ssh client: &{IP:172.19.47.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\id_rsa Username:docker}
	I0419 19:01:07.859035   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:01:07.859035   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:07.859982   14960 sshutil.go:53] new ssh client: &{IP:172.19.47.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\id_rsa Username:docker}
	I0419 19:01:08.053699   14960 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0419 19:01:08.053869   14960 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9966973s)
	I0419 19:01:08.053869   14960 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0419 19:01:08.053929   14960 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9853511s)
	W0419 19:01:08.054000   14960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 19:01:08.073960   14960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 19:01:08.108058   14960 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0419 19:01:08.108114   14960 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 19:01:08.108114   14960 start.go:494] detecting cgroup driver to use...
	I0419 19:01:08.108114   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 19:01:08.147428   14960 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0419 19:01:08.162147   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0419 19:01:08.197273   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0419 19:01:08.221559   14960 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0419 19:01:08.235303   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0419 19:01:08.269022   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 19:01:08.308858   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0419 19:01:08.352935   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 19:01:08.388625   14960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 19:01:08.425846   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0419 19:01:08.465683   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0419 19:01:08.501891   14960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0419 19:01:08.543670   14960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 19:01:08.563544   14960 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0419 19:01:08.578557   14960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 19:01:08.613027   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 19:01:08.842996   14960 ssh_runner.go:195] Run: sudo systemctl restart containerd
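The run of `sed` commands above rewrites containerd's config.toml in place to force the cgroupfs driver, disable the systemd cgroup integration, and pin the runc v2 shim. The SystemdCgroup edit can be reproduced against a throwaway copy of the file (illustrative config fragment, GNU sed assumed, same sed expression as the log):

```shell
set -e
f=$(mktemp)                      # stand-in for /etc/containerd/config.toml
cat > "$f" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# same expression as the ssh_runner.go:195 line above; \1 preserves indentation
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$f"
grep 'SystemdCgroup' "$f"        # now reads: SystemdCgroup = false
```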
	I0419 19:01:08.882240   14960 start.go:494] detecting cgroup driver to use...
	I0419 19:01:08.898897   14960 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0419 19:01:08.928639   14960 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0419 19:01:08.928803   14960 command_runner.go:130] > [Unit]
	I0419 19:01:08.928848   14960 command_runner.go:130] > Description=Docker Application Container Engine
	I0419 19:01:08.928848   14960 command_runner.go:130] > Documentation=https://docs.docker.com
	I0419 19:01:08.928848   14960 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0419 19:01:08.928848   14960 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0419 19:01:08.928848   14960 command_runner.go:130] > StartLimitBurst=3
	I0419 19:01:08.928848   14960 command_runner.go:130] > StartLimitIntervalSec=60
	I0419 19:01:08.928848   14960 command_runner.go:130] > [Service]
	I0419 19:01:08.928848   14960 command_runner.go:130] > Type=notify
	I0419 19:01:08.928848   14960 command_runner.go:130] > Restart=on-failure
	I0419 19:01:08.928940   14960 command_runner.go:130] > Environment=NO_PROXY=172.19.42.24
	I0419 19:01:08.928940   14960 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0419 19:01:08.929007   14960 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0419 19:01:08.929045   14960 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0419 19:01:08.929045   14960 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0419 19:01:08.929103   14960 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0419 19:01:08.929103   14960 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0419 19:01:08.929130   14960 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0419 19:01:08.929186   14960 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0419 19:01:08.929186   14960 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0419 19:01:08.929186   14960 command_runner.go:130] > ExecStart=
	I0419 19:01:08.929186   14960 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0419 19:01:08.929186   14960 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0419 19:01:08.929186   14960 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0419 19:01:08.929186   14960 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0419 19:01:08.929186   14960 command_runner.go:130] > LimitNOFILE=infinity
	I0419 19:01:08.929186   14960 command_runner.go:130] > LimitNPROC=infinity
	I0419 19:01:08.929186   14960 command_runner.go:130] > LimitCORE=infinity
	I0419 19:01:08.929186   14960 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0419 19:01:08.929186   14960 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0419 19:01:08.929186   14960 command_runner.go:130] > TasksMax=infinity
	I0419 19:01:08.929186   14960 command_runner.go:130] > TimeoutStartSec=0
	I0419 19:01:08.929186   14960 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0419 19:01:08.929186   14960 command_runner.go:130] > Delegate=yes
	I0419 19:01:08.929186   14960 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0419 19:01:08.929186   14960 command_runner.go:130] > KillMode=process
	I0419 19:01:08.929186   14960 command_runner.go:130] > [Install]
	I0419 19:01:08.929186   14960 command_runner.go:130] > WantedBy=multi-user.target
	I0419 19:01:08.944507   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 19:01:08.989765   14960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 19:01:09.036757   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 19:01:09.080760   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 19:01:09.120826   14960 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0419 19:01:09.194341   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 19:01:09.221446   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 19:01:09.258347   14960 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0419 19:01:09.270335   14960 ssh_runner.go:195] Run: which cri-dockerd
	I0419 19:01:09.281338   14960 command_runner.go:130] > /usr/bin/cri-dockerd
	I0419 19:01:09.296395   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0419 19:01:09.317652   14960 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0419 19:01:09.369444   14960 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0419 19:01:09.591646   14960 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0419 19:01:09.791897   14960 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0419 19:01:09.792098   14960 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0419 19:01:09.842651   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 19:01:10.066054   14960 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 19:01:12.701497   14960 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.635438s)
	I0419 19:01:12.716637   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0419 19:01:12.761639   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 19:01:12.801948   14960 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0419 19:01:13.025145   14960 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0419 19:01:13.233611   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 19:01:13.454757   14960 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0419 19:01:13.502274   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 19:01:13.542691   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 19:01:13.791570   14960 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0419 19:01:13.917116   14960 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0419 19:01:13.927454   14960 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0419 19:01:13.946428   14960 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0419 19:01:13.946428   14960 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0419 19:01:13.946428   14960 command_runner.go:130] > Device: 0,22	Inode: 860         Links: 1
	I0419 19:01:13.946428   14960 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0419 19:01:13.946428   14960 command_runner.go:130] > Access: 2024-04-20 02:01:13.806811980 +0000
	I0419 19:01:13.946428   14960 command_runner.go:130] > Modify: 2024-04-20 02:01:13.806811980 +0000
	I0419 19:01:13.946428   14960 command_runner.go:130] > Change: 2024-04-20 02:01:13.810812117 +0000
	I0419 19:01:13.946428   14960 command_runner.go:130] >  Birth: -
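"Will wait 60s for socket path" followed by a `stat` on /var/run/cri-dockerd.sock amounts to polling until the socket appears or a deadline passes. A hedged sketch of such a wait loop (function name, timeout, and path are illustrative, not minikube's implementation):

```shell
wait_for_path() {
  # Poll until $1 exists or $2 seconds elapse (default 60); 0 on success, 1 on timeout.
  local path=$1 deadline=$(( $(date +%s) + ${2:-60} ))
  until [ -e "$path" ]; do
    if [ "$(date +%s)" -ge "$deadline" ]; then return 1; fi
    sleep 1
  done
}

sock=$(mktemp)                   # stand-in for /var/run/cri-dockerd.sock
wait_for_path "$sock" 5 && echo "socket ready"
```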
	I0419 19:01:13.946428   14960 start.go:562] Will wait 60s for crictl version
	I0419 19:01:13.960453   14960 ssh_runner.go:195] Run: which crictl
	I0419 19:01:13.967237   14960 command_runner.go:130] > /usr/bin/crictl
	I0419 19:01:13.981372   14960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 19:01:14.042136   14960 command_runner.go:130] > Version:  0.1.0
	I0419 19:01:14.042270   14960 command_runner.go:130] > RuntimeName:  docker
	I0419 19:01:14.042270   14960 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0419 19:01:14.042270   14960 command_runner.go:130] > RuntimeApiVersion:  v1
	I0419 19:01:14.042373   14960 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0419 19:01:14.052180   14960 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 19:01:14.091495   14960 command_runner.go:130] > 26.0.1
	I0419 19:01:14.103244   14960 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 19:01:14.137426   14960 command_runner.go:130] > 26.0.1
	I0419 19:01:14.145035   14960 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0419 19:01:14.147657   14960 out.go:177]   - env NO_PROXY=172.19.42.24
	I0419 19:01:14.149658   14960 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0419 19:01:14.154656   14960 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0419 19:01:14.154656   14960 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0419 19:01:14.154656   14960 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0419 19:01:14.154656   14960 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8c:b9:25 Flags:up|broadcast|multicast|running}
	I0419 19:01:14.157661   14960 ip.go:210] interface addr: fe80::ce04:318e:a1d8:4460/64
	I0419 19:01:14.157661   14960 ip.go:210] interface addr: 172.19.32.1/20
	I0419 19:01:14.171677   14960 ssh_runner.go:195] Run: grep 172.19.32.1	host.minikube.internal$ /etc/hosts
	I0419 19:01:14.179110   14960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.32.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 19:01:14.202666   14960 mustload.go:65] Loading cluster: multinode-348000
	I0419 19:01:14.203401   14960 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 19:01:14.204153   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 19:01:16.329740   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:01:16.330191   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:16.330191   14960 host.go:66] Checking if "multinode-348000" exists ...
	I0419 19:01:16.330863   14960 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000 for IP: 172.19.47.34
	I0419 19:01:16.330863   14960 certs.go:194] generating shared ca certs ...
	I0419 19:01:16.330863   14960 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:01:16.331414   14960 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0419 19:01:16.331666   14960 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0419 19:01:16.331666   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 19:01:16.332342   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0419 19:01:16.332530   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 19:01:16.332769   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 19:01:16.332769   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem (1338 bytes)
	W0419 19:01:16.333349   14960 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416_empty.pem, impossibly tiny 0 bytes
	I0419 19:01:16.333582   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0419 19:01:16.333793   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0419 19:01:16.333793   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0419 19:01:16.333793   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0419 19:01:16.335039   14960 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem (1708 bytes)
	I0419 19:01:16.335270   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 19:01:16.335504   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem -> /usr/share/ca-certificates/3416.pem
	I0419 19:01:16.335693   14960 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem -> /usr/share/ca-certificates/34162.pem
	I0419 19:01:16.335693   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 19:01:16.399108   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 19:01:16.450867   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 19:01:16.506333   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 19:01:16.556601   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 19:01:16.614342   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3416.pem --> /usr/share/ca-certificates/3416.pem (1338 bytes)
	I0419 19:01:16.661285   14960 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\34162.pem --> /usr/share/ca-certificates/34162.pem (1708 bytes)
	I0419 19:01:16.733715   14960 ssh_runner.go:195] Run: openssl version
	I0419 19:01:16.745380   14960 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0419 19:01:16.760333   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 19:01:16.798285   14960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 19:01:16.806669   14960 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 19:01:16.806669   14960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I0419 19:01:16.821616   14960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 19:01:16.830618   14960 command_runner.go:130] > b5213941
	I0419 19:01:16.844377   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 19:01:16.879247   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3416.pem && ln -fs /usr/share/ca-certificates/3416.pem /etc/ssl/certs/3416.pem"
	I0419 19:01:16.914700   14960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3416.pem
	I0419 19:01:16.923267   14960 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 19:01:16.924204   14960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:10 /usr/share/ca-certificates/3416.pem
	I0419 19:01:16.937060   14960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3416.pem
	I0419 19:01:16.946404   14960 command_runner.go:130] > 51391683
	I0419 19:01:16.960456   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3416.pem /etc/ssl/certs/51391683.0"
	I0419 19:01:16.997669   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34162.pem && ln -fs /usr/share/ca-certificates/34162.pem /etc/ssl/certs/34162.pem"
	I0419 19:01:17.033682   14960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34162.pem
	I0419 19:01:17.041522   14960 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 19:01:17.041522   14960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:10 /usr/share/ca-certificates/34162.pem
	I0419 19:01:17.055348   14960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34162.pem
	I0419 19:01:17.065520   14960 command_runner.go:130] > 3ec20f2e
	I0419 19:01:17.079279   14960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34162.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 19:01:17.116414   14960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 19:01:17.123098   14960 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 19:01:17.124706   14960 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 19:01:17.124920   14960 kubeadm.go:928] updating node {m02 172.19.47.34 8443 v1.30.0 docker false true} ...
	I0419 19:01:17.125141   14960 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-348000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.47.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 19:01:17.138352   14960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 19:01:17.160399   14960 command_runner.go:130] > kubeadm
	I0419 19:01:17.160399   14960 command_runner.go:130] > kubectl
	I0419 19:01:17.160399   14960 command_runner.go:130] > kubelet
	I0419 19:01:17.160399   14960 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 19:01:17.174019   14960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0419 19:01:17.194262   14960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0419 19:01:17.229251   14960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 19:01:17.279087   14960 ssh_runner.go:195] Run: grep 172.19.42.24	control-plane.minikube.internal$ /etc/hosts
	I0419 19:01:17.286304   14960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.42.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 19:01:17.324868   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 19:01:17.536268   14960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 19:01:17.572578   14960 host.go:66] Checking if "multinode-348000" exists ...
	I0419 19:01:17.573436   14960 start.go:316] joinCluster: &{Name:multinode-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-348000 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.42.24 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.47.34 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.37.59 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisione
r:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 19:01:17.573651   14960 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.19.47.34 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0419 19:01:17.573725   14960 host.go:66] Checking if "multinode-348000-m02" exists ...
	I0419 19:01:17.574300   14960 mustload.go:65] Loading cluster: multinode-348000
	I0419 19:01:17.574781   14960 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 19:01:17.575387   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 19:01:19.772194   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:01:19.772194   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:19.772194   14960 host.go:66] Checking if "multinode-348000" exists ...
	I0419 19:01:19.773657   14960 api_server.go:166] Checking apiserver status ...
	I0419 19:01:19.792360   14960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 19:01:19.792360   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 19:01:21.959590   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:01:21.959590   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:21.959590   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 19:01:24.565929   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 19:01:24.565929   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:24.566380   14960 sshutil.go:53] new ssh client: &{IP:172.19.42.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 19:01:24.680711   14960 command_runner.go:130] > 1877
	I0419 19:01:24.680711   14960 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.8883404s)
	I0419 19:01:24.694244   14960 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1877/cgroup
	W0419 19:01:24.714312   14960 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1877/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 19:01:24.728594   14960 ssh_runner.go:195] Run: ls
	I0419 19:01:24.741144   14960 api_server.go:253] Checking apiserver healthz at https://172.19.42.24:8443/healthz ...
	I0419 19:01:24.749114   14960 api_server.go:279] https://172.19.42.24:8443/healthz returned 200:
	ok
	I0419 19:01:24.762494   14960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-348000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0419 19:01:24.921133   14960 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-s98rh, kube-system/kube-proxy-bjv9b
	I0419 19:01:27.962842   14960 command_runner.go:130] > node/multinode-348000-m02 cordoned
	I0419 19:01:27.962842   14960 command_runner.go:130] > pod "busybox-fc5497c4f-2d5hs" has DeletionTimestamp older than 1 seconds, skipping
	I0419 19:01:27.962842   14960 command_runner.go:130] > node/multinode-348000-m02 drained
	I0419 19:01:27.962842   14960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-348000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.200341s)
	I0419 19:01:27.962842   14960 node.go:128] successfully drained node "multinode-348000-m02"
	I0419 19:01:27.962842   14960 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0419 19:01:27.962842   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 19:01:30.126646   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:01:30.126646   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:30.127588   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 19:01:32.772503   14960 main.go:141] libmachine: [stdout =====>] : 172.19.47.34
	
	I0419 19:01:32.772634   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:32.772777   14960 sshutil.go:53] new ssh client: &{IP:172.19.47.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\id_rsa Username:docker}
	I0419 19:01:33.271059   14960 command_runner.go:130] ! W0420 02:01:33.258193    1546 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0419 19:01:33.892281   14960 command_runner.go:130] ! W0420 02:01:33.879473    1546 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod a8f6b8169c72cdcce217a8588db0863a6d44839a0a40fadcb1e83f6c0b93ade3: output: E0420 02:01:33.527603    1582 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-2d5hs_default\" network: cni config uninitialized" podSandboxID="a8f6b8169c72cdcce217a8588db0863a6d44839a0a40fadcb1e83f6c0b93ade3"
	I0419 19:01:33.892334   14960 command_runner.go:130] ! time="2024-04-20T02:01:33Z" level=fatal msg="stopping the pod sandbox \"a8f6b8169c72cdcce217a8588db0863a6d44839a0a40fadcb1e83f6c0b93ade3\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-2d5hs_default\" network: cni config uninitialized"
	I0419 19:01:33.892334   14960 command_runner.go:130] ! : exit status 1
	I0419 19:01:33.919921   14960 command_runner.go:130] > [preflight] Running pre-flight checks
	I0419 19:01:33.920035   14960 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0419 19:01:33.920035   14960 command_runner.go:130] > [reset] Stopping the kubelet service
	I0419 19:01:33.920035   14960 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0419 19:01:33.920114   14960 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0419 19:01:33.920114   14960 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0419 19:01:33.920114   14960 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0419 19:01:33.920114   14960 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0419 19:01:33.920114   14960 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0419 19:01:33.920114   14960 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0419 19:01:33.920114   14960 command_runner.go:130] > to reset your system's IPVS tables.
	I0419 19:01:33.920114   14960 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0419 19:01:33.920114   14960 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0419 19:01:33.920114   14960 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.9572599s)
	I0419 19:01:33.920114   14960 node.go:155] successfully reset node "multinode-348000-m02"
	I0419 19:01:33.921684   14960 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 19:01:33.921751   14960 kapi.go:59] client config for multinode-348000: &rest.Config{Host:"https://172.19.42.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c35620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 19:01:33.923072   14960 cert_rotation.go:137] Starting client certificate rotation controller
	I0419 19:01:33.923889   14960 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0419 19:01:33.924013   14960 round_trippers.go:463] DELETE https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:33.924048   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:33.924048   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:33.924079   14960 round_trippers.go:473]     Content-Type: application/json
	I0419 19:01:33.924079   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:33.941110   14960 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0419 19:01:33.941110   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:33.941110   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:33.941191   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:33.941191   14960 round_trippers.go:580]     Content-Length: 171
	I0419 19:01:33.941191   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:33 GMT
	I0419 19:01:33.941290   14960 round_trippers.go:580]     Audit-Id: 1d74b676-1386-4baf-a7a5-6c73d15d4038
	I0419 19:01:33.941290   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:33.941290   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:33.941340   14960 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-348000-m02","kind":"nodes","uid":"55e3ec83-8d61-4351-9f3f-477b2ef05608"}}
	I0419 19:01:33.941340   14960 node.go:180] successfully deleted node "multinode-348000-m02"
	I0419 19:01:33.941440   14960 start.go:333] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.19.47.34 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0419 19:01:33.941508   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0419 19:01:33.941585   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 19:01:36.054151   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:01:36.054151   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:36.054293   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 19:01:38.625022   14960 main.go:141] libmachine: [stdout =====>] : 172.19.42.24
	
	I0419 19:01:38.626060   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:38.626313   14960 sshutil.go:53] new ssh client: &{IP:172.19.42.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 19:01:38.824137   14960 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token lulnn1.bllunk0142pxcua8 --discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 
	I0419 19:01:38.824137   14960 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.882619s)
	I0419 19:01:38.824273   14960 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.19.47.34 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0419 19:01:38.824312   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lulnn1.bllunk0142pxcua8 --discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-348000-m02"
	I0419 19:01:39.058259   14960 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0419 19:01:40.452720   14960 command_runner.go:130] > [preflight] Running pre-flight checks
	I0419 19:01:40.452866   14960 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0419 19:01:40.452866   14960 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0419 19:01:40.452866   14960 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 19:01:40.452866   14960 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 19:01:40.452866   14960 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0419 19:01:40.452929   14960 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0419 19:01:40.452929   14960 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002344373s
	I0419 19:01:40.452987   14960 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0419 19:01:40.452987   14960 command_runner.go:130] > This node has joined the cluster:
	I0419 19:01:40.453015   14960 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0419 19:01:40.453015   14960 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0419 19:01:40.453015   14960 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0419 19:01:40.453087   14960 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lulnn1.bllunk0142pxcua8 --discovery-token-ca-cert-hash sha256:1bf61395fefa1828a907c290a6fa14b45849714fe0b0b8f04ce869ac89269a01 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-348000-m02": (1.6287717s)
	I0419 19:01:40.453267   14960 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0419 19:01:40.678777   14960 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0419 19:01:40.900769   14960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-348000-m02 minikube.k8s.io/updated_at=2024_04_19T19_01_40_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=multinode-348000 minikube.k8s.io/primary=false
	I0419 19:01:41.055068   14960 command_runner.go:130] > node/multinode-348000-m02 labeled
	I0419 19:01:41.055068   14960 start.go:318] duration metric: took 23.4815828s to joinCluster
	I0419 19:01:41.055068   14960 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.47.34 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0419 19:01:41.063162   14960 out.go:177] * Verifying Kubernetes components...
	I0419 19:01:41.059055   14960 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 19:01:41.080884   14960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 19:01:41.300370   14960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 19:01:41.331316   14960 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 19:01:41.332113   14960 kapi.go:59] client config for multinode-348000: &rest.Config{Host:"https://172.19.42.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-348000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c35620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 19:01:41.333067   14960 node_ready.go:35] waiting up to 6m0s for node "multinode-348000-m02" to be "Ready" ...
	I0419 19:01:41.333216   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:41.333216   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:41.333216   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:41.333216   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:41.337635   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 19:01:41.337706   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:41.337706   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:41.337706   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:41.337706   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:41.337791   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:41.337791   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:41 GMT
	I0419 19:01:41.337791   14960 round_trippers.go:580]     Audit-Id: 66e04f6c-f89c-48a7-aa9b-f0859b332d37
	I0419 19:01:41.338090   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2104","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0419 19:01:41.834693   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:41.834765   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:41.834765   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:41.834765   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:41.838155   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:41.838709   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:41.838709   14960 round_trippers.go:580]     Audit-Id: 0d452dcb-2520-4e2c-a48f-d3784908f2bc
	I0419 19:01:41.838709   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:41.838709   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:41.838709   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:41.838820   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:41.838820   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:41 GMT
	I0419 19:01:41.839012   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2104","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0419 19:01:42.338826   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:42.338887   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:42.338887   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:42.338887   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:42.346387   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 19:01:42.346387   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:42.346387   14960 round_trippers.go:580]     Audit-Id: 23da4afe-332a-4a27-81e6-af4580e224e9
	I0419 19:01:42.346387   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:42.346387   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:42.346387   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:42.346387   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:42.346387   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:42 GMT
	I0419 19:01:42.347351   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2104","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0419 19:01:42.836832   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:42.836832   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:42.836832   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:42.836832   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:42.843828   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 19:01:42.843828   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:42.843828   14960 round_trippers.go:580]     Audit-Id: 4b28280e-c168-4a6d-8a76-5320f2bce41e
	I0419 19:01:42.843828   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:42.843828   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:42.843828   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:42.843828   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:42.843828   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:42 GMT
	I0419 19:01:42.843828   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2104","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0419 19:01:43.346845   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:43.346913   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:43.346913   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:43.346913   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:43.350324   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:43.351221   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:43.351221   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:43.351221   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:43.351303   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:43.351303   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:43 GMT
	I0419 19:01:43.351326   14960 round_trippers.go:580]     Audit-Id: 2d042065-5fe2-4aac-ae0f-1879cb2ee98b
	I0419 19:01:43.351326   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:43.351642   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2104","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0419 19:01:43.351759   14960 node_ready.go:53] node "multinode-348000-m02" has status "Ready":"False"
	I0419 19:01:43.838176   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:43.838239   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:43.838287   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:43.838287   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:43.845809   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 19:01:43.845809   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:43.845809   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:43.845809   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:43.845809   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:43.845809   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:43.845809   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:43 GMT
	I0419 19:01:43.845809   14960 round_trippers.go:580]     Audit-Id: 5b6c0e89-4b9f-4e1e-b63c-45ba9f620b06
	I0419 19:01:43.846457   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:44.341171   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:44.341171   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:44.341171   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:44.341171   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:44.348316   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 19:01:44.348316   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:44.348316   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:44.348316   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:44.348316   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:44.348316   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:44 GMT
	I0419 19:01:44.348316   14960 round_trippers.go:580]     Audit-Id: 3e0924df-697a-4fcf-8e5c-08800e1ddff8
	I0419 19:01:44.348316   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:44.348316   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:44.840070   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:44.840220   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:44.840220   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:44.840303   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:44.844591   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 19:01:44.844591   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:44.844591   14960 round_trippers.go:580]     Audit-Id: e0ff0c7a-45c3-4139-8873-86f79a227ade
	I0419 19:01:44.844591   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:44.845220   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:44.845220   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:44.845220   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:44.845270   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:44 GMT
	I0419 19:01:44.845485   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:45.337160   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:45.337160   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:45.337160   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:45.337160   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:45.345619   14960 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 19:01:45.345619   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:45.345619   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:45.345619   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:45.345619   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:45 GMT
	I0419 19:01:45.345619   14960 round_trippers.go:580]     Audit-Id: c9ee900d-f027-4b4b-b47a-d928341aefc4
	I0419 19:01:45.345619   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:45.346017   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:45.346561   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:45.839469   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:45.839580   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:45.839580   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:45.839580   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:45.843114   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:45.843114   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:45.843114   14960 round_trippers.go:580]     Audit-Id: 94242ce1-9def-442e-a607-ccda8bb10bed
	I0419 19:01:45.843114   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:45.843114   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:45.843114   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:45.843114   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:45.843114   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:45 GMT
	I0419 19:01:45.843481   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:45.844025   14960 node_ready.go:53] node "multinode-348000-m02" has status "Ready":"False"
	I0419 19:01:46.340968   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:46.341026   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:46.341026   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:46.341026   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:46.345623   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 19:01:46.345717   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:46.345717   14960 round_trippers.go:580]     Audit-Id: f2532663-d9cb-4dff-933c-c87ef1778a1f
	I0419 19:01:46.345717   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:46.345717   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:46.345717   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:46.345717   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:46.345717   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:46 GMT
	I0419 19:01:46.345908   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:46.833622   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:46.833622   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:46.833698   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:46.833698   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:46.838617   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 19:01:46.838617   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:46.838617   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:46.838617   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:46.838617   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:46.839042   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:46 GMT
	I0419 19:01:46.839042   14960 round_trippers.go:580]     Audit-Id: 548d8383-10c0-4a60-baa3-7f4a28fb91b3
	I0419 19:01:46.839042   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:46.839106   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:47.334238   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:47.334315   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:47.334315   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:47.334315   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:47.338183   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:47.338183   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:47.338818   14960 round_trippers.go:580]     Audit-Id: 1fbae893-1952-42ae-bcf9-4f77dbb6dc4a
	I0419 19:01:47.338818   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:47.338818   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:47.338818   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:47.338818   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:47.338818   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:47 GMT
	I0419 19:01:47.338990   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:47.834984   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:47.835063   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:47.835063   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:47.835063   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:47.841019   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 19:01:47.841019   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:47.841019   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:47.841019   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:47.841019   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:47 GMT
	I0419 19:01:47.841019   14960 round_trippers.go:580]     Audit-Id: d1d49fdb-64d0-4024-9496-4daf13ceea8f
	I0419 19:01:47.841019   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:47.841019   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:47.841555   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:48.336497   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:48.336735   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:48.336735   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:48.336735   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:48.339794   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:48.340569   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:48.340569   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:48.340569   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:48.340569   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:48 GMT
	I0419 19:01:48.340569   14960 round_trippers.go:580]     Audit-Id: c1059c42-86d1-4643-a47d-cd9285b13341
	I0419 19:01:48.340569   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:48.340569   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:48.340741   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:48.340741   14960 node_ready.go:53] node "multinode-348000-m02" has status "Ready":"False"
	I0419 19:01:48.834374   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:48.834374   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:48.834374   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:48.834374   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:48.838010   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:48.838010   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:48.838010   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:48.838010   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:48.838010   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:48 GMT
	I0419 19:01:48.838010   14960 round_trippers.go:580]     Audit-Id: 1c60fd3b-22be-44c1-9567-490dd33e5fb2
	I0419 19:01:48.838010   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:48.838336   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:48.838624   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2121","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0419 19:01:49.345940   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:49.345940   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.345940   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.345940   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.350590   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 19:01:49.350914   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.350914   14960 round_trippers.go:580]     Audit-Id: 2f82e4f6-daeb-4eb6-8ed2-0a0c68ac3d64
	I0419 19:01:49.350914   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.350914   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.350914   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.350914   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.350914   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.351149   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2135","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0419 19:01:49.351691   14960 node_ready.go:49] node "multinode-348000-m02" has status "Ready":"True"
	I0419 19:01:49.351691   14960 node_ready.go:38] duration metric: took 8.0186071s for node "multinode-348000-m02" to be "Ready" ...
	I0419 19:01:49.351691   14960 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 19:01:49.351816   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods
	I0419 19:01:49.351922   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.351922   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.351922   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.357090   14960 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 19:01:49.357090   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.357090   14960 round_trippers.go:580]     Audit-Id: 3ca6cfc8-a637-43d9-80c9-acf9e9398fed
	I0419 19:01:49.357579   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.357579   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.357579   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.357579   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.357579   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.359726   14960 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2137"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1944","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86034 chars]
	I0419 19:01:49.363493   14960 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.363493   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7w477
	I0419 19:01:49.363493   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.363493   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.363493   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.367195   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:49.367195   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.367195   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.367195   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.367195   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.367195   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.367195   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.367195   14960 round_trippers.go:580]     Audit-Id: f9f5efc5-051b-411b-b390-d5a07dfd1655
	I0419 19:01:49.367680   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7w477","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"895ddde9-466d-4abf-b6f4-594847b26c6c","resourceVersion":"1944","creationTimestamp":"2024-04-20T01:35:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"31feef66-8f5d-41da-99b9-b410825cc1b4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31feef66-8f5d-41da-99b9-b410825cc1b4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6786 chars]
	I0419 19:01:49.368383   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 19:01:49.368383   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.368383   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.368433   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.371250   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 19:01:49.371250   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.371250   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.371250   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.371250   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.371250   14960 round_trippers.go:580]     Audit-Id: 98f36e51-1dac-4a71-a229-f685771b545b
	I0419 19:01:49.371250   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.371250   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.372422   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 19:01:49.372475   14960 pod_ready.go:92] pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace has status "Ready":"True"
	I0419 19:01:49.372475   14960 pod_ready.go:81] duration metric: took 8.9819ms for pod "coredns-7db6d8ff4d-7w477" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.372475   14960 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.372475   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-348000
	I0419 19:01:49.372475   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.372475   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.372475   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.376623   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:49.376680   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.376680   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.376680   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.376680   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.376680   14960 round_trippers.go:580]     Audit-Id: a655482b-dcbc-4e08-831f-f9a829493409
	I0419 19:01:49.376680   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.376816   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.376863   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-348000","namespace":"kube-system","uid":"33702588-cdf3-4577-b18d-18415cca2c25","resourceVersion":"1836","creationTimestamp":"2024-04-20T01:58:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.42.24:2379","kubernetes.io/config.hash":"c0cfa3da6a3913c3e67500f6c3e9d72b","kubernetes.io/config.mirror":"c0cfa3da6a3913c3e67500f6c3e9d72b","kubernetes.io/config.seen":"2024-04-20T01:57:55.099346749Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:58:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6149 chars]
	I0419 19:01:49.377407   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 19:01:49.377407   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.377407   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.377407   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.381033   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:49.381033   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.381033   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.381033   14960 round_trippers.go:580]     Audit-Id: adef41ac-4d59-4d1c-9d43-4c2f73229310
	I0419 19:01:49.381033   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.381033   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.381033   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.381033   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.381800   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 19:01:49.381800   14960 pod_ready.go:92] pod "etcd-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 19:01:49.381800   14960 pod_ready.go:81] duration metric: took 9.325ms for pod "etcd-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.381800   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.381800   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-348000
	I0419 19:01:49.382355   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.382355   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.382422   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.385941   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:49.385941   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.385941   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.385941   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.385941   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.385941   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.386740   14960 round_trippers.go:580]     Audit-Id: 2c195bc7-d84e-4c7f-98ef-27af298a02f6
	I0419 19:01:49.386740   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.386974   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-348000","namespace":"kube-system","uid":"13adbf1b-6c17-47a9-951d-2481680a47bd","resourceVersion":"1823","creationTimestamp":"2024-04-20T01:58:01Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.42.24:8443","kubernetes.io/config.hash":"af7a3c9321ace7e2a933260472b90113","kubernetes.io/config.mirror":"af7a3c9321ace7e2a933260472b90113","kubernetes.io/config.seen":"2024-04-20T01:57:55.026086199Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:58:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7685 chars]
	I0419 19:01:49.387536   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 19:01:49.387536   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.387536   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.387608   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.389803   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 19:01:49.389803   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.389803   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.389803   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.389803   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.389803   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.389803   14960 round_trippers.go:580]     Audit-Id: d637f7f2-9a30-474d-bd31-d40f71eb0cef
	I0419 19:01:49.389803   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.389803   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 19:01:49.391759   14960 pod_ready.go:92] pod "kube-apiserver-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 19:01:49.391817   14960 pod_ready.go:81] duration metric: took 10.0173ms for pod "kube-apiserver-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.391817   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.391933   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-348000
	I0419 19:01:49.391933   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.391933   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.391933   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.395098   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:49.395098   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.395098   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.395098   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.395098   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.395098   14960 round_trippers.go:580]     Audit-Id: 7f468c5c-827e-4301-87bb-c2cbe94d6257
	I0419 19:01:49.395098   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.395098   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.395517   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-348000","namespace":"kube-system","uid":"299bb088-9795-4452-87a8-5e96bcacedde","resourceVersion":"1829","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"30aa2729d0c65b9f89e1ae2d151edd9b","kubernetes.io/config.mirror":"30aa2729d0c65b9f89e1ae2d151edd9b","kubernetes.io/config.seen":"2024-04-20T01:35:08.321898260Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0419 19:01:49.396243   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 19:01:49.396243   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.396243   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.396243   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.398549   14960 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 19:01:49.398549   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.398549   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.398549   14960 round_trippers.go:580]     Audit-Id: fc959da9-7795-49a2-b1ec-b182563f5705
	I0419 19:01:49.398549   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.399314   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.399314   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.399314   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.399607   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 19:01:49.399776   14960 pod_ready.go:92] pod "kube-controller-manager-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 19:01:49.399776   14960 pod_ready.go:81] duration metric: took 7.9587ms for pod "kube-controller-manager-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.399776   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2jjsq" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.548882   14960 request.go:629] Waited for 149.1059ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2jjsq
	I0419 19:01:49.548882   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2jjsq
	I0419 19:01:49.548882   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.548882   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.548882   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.553533   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 19:01:49.553533   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.553533   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.553533   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.553533   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.553533   14960 round_trippers.go:580]     Audit-Id: 6d9624a1-a9f9-4ea9-8b3d-162112f9c72a
	I0419 19:01:49.553533   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.553533   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.554222   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2jjsq","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9666ab7-0d1f-4800-b979-6e38fecdc518","resourceVersion":"1708","creationTimestamp":"2024-04-20T01:42:52Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:42:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0419 19:01:49.751680   14960 request.go:629] Waited for 196.735ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m03
	I0419 19:01:49.751902   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m03
	I0419 19:01:49.751999   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.751999   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.752053   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.759773   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:49.759866   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.759866   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.759933   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.759933   14960 round_trippers.go:580]     Audit-Id: 0b602dda-32d4-48c8-a880-e24545726ec5
	I0419 19:01:49.759933   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.759933   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.759933   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.760161   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m03","uid":"08bfca2d-b382-4052-a5b6-0a78bee7caef","resourceVersion":"1871","creationTimestamp":"2024-04-20T01:53:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T18_53_29_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:53:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4398 chars]
	I0419 19:01:49.760269   14960 pod_ready.go:97] node "multinode-348000-m03" hosting pod "kube-proxy-2jjsq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000-m03" has status "Ready":"Unknown"
	I0419 19:01:49.760817   14960 pod_ready.go:81] duration metric: took 361.0405ms for pod "kube-proxy-2jjsq" in "kube-system" namespace to be "Ready" ...
	E0419 19:01:49.760817   14960 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-348000-m03" hosting pod "kube-proxy-2jjsq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-348000-m03" has status "Ready":"Unknown"
	I0419 19:01:49.760817   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bjv9b" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:49.954183   14960 request.go:629] Waited for 193.1754ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bjv9b
	I0419 19:01:49.954458   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bjv9b
	I0419 19:01:49.954458   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:49.954458   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:49.954458   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:49.958169   14960 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 19:01:49.958169   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:49.958169   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:49.958169   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:49 GMT
	I0419 19:01:49.958169   14960 round_trippers.go:580]     Audit-Id: 295b34b8-91d4-4588-9356-40f2469ffd00
	I0419 19:01:49.958169   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:49.958169   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:49.958169   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:49.960223   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bjv9b","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e909d14-543a-4734-8c17-7e2b8188553d","resourceVersion":"2116","creationTimestamp":"2024-04-20T01:38:18Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0419 19:01:50.157140   14960 request.go:629] Waited for 195.6834ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:50.157140   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000-m02
	I0419 19:01:50.157140   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:50.157140   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:50.157140   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:50.161738   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 19:01:50.161738   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:50.161738   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:50.161738   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:50.161738   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:50.161995   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:50 GMT
	I0419 19:01:50.161995   14960 round_trippers.go:580]     Audit-Id: 1cf63281-c046-49fe-ba39-ac73ff5f9bd6
	I0419 19:01:50.161995   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:50.162265   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000-m02","uid":"889ea669-0e6f-4959-b6c0-7772795aed91","resourceVersion":"2135","creationTimestamp":"2024-04-20T02:01:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_19T19_01_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-20T02:01:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0419 19:01:50.162739   14960 pod_ready.go:92] pod "kube-proxy-bjv9b" in "kube-system" namespace has status "Ready":"True"
	I0419 19:01:50.162739   14960 pod_ready.go:81] duration metric: took 401.9205ms for pod "kube-proxy-bjv9b" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:50.162739   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kj76x" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:50.346335   14960 request.go:629] Waited for 183.1332ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kj76x
	I0419 19:01:50.346412   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kj76x
	I0419 19:01:50.346492   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:50.346492   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:50.346492   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:50.354744   14960 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 19:01:50.355763   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:50.355763   14960 round_trippers.go:580]     Audit-Id: e1f21d5b-ad88-407d-9210-0ed3613da2ca
	I0419 19:01:50.355763   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:50.355763   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:50.355763   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:50.355763   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:50.355763   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:50 GMT
	I0419 19:01:50.355763   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kj76x","generateName":"kube-proxy-","namespace":"kube-system","uid":"274342c4-c21f-4279-b0ea-743d8e2c1463","resourceVersion":"1750","creationTimestamp":"2024-04-20T01:35:22Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7a04960-7464-436d-9cc4-e19df30d0d8b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7a04960-7464-436d-9cc4-e19df30d0d8b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0419 19:01:50.549756   14960 request.go:629] Waited for 193.2869ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 19:01:50.550059   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 19:01:50.550059   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:50.550216   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:50.550216   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:50.556750   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 19:01:50.556750   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:50.556750   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:50.556750   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:50.556750   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:50 GMT
	I0419 19:01:50.556750   14960 round_trippers.go:580]     Audit-Id: a37d225f-38b9-49da-b605-7e1f17b98f91
	I0419 19:01:50.556750   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:50.556750   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:50.557477   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 19:01:50.557532   14960 pod_ready.go:92] pod "kube-proxy-kj76x" in "kube-system" namespace has status "Ready":"True"
	I0419 19:01:50.557532   14960 pod_ready.go:81] duration metric: took 394.7928ms for pod "kube-proxy-kj76x" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:50.557532   14960 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:50.754765   14960 request.go:629] Waited for 196.6075ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348000
	I0419 19:01:50.754765   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-348000
	I0419 19:01:50.754765   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:50.755000   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:50.755000   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:50.761472   14960 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 19:01:50.761472   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:50.761472   14960 round_trippers.go:580]     Audit-Id: 46d191e6-cfc8-48b4-a234-f1551e962def
	I0419 19:01:50.761472   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:50.761472   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:50.761472   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:50.761472   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:50.761472   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:50 GMT
	I0419 19:01:50.762447   14960 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-348000","namespace":"kube-system","uid":"000cfafe-a513-4738-9de2-3c25244b72be","resourceVersion":"1824","creationTimestamp":"2024-04-20T01:35:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"92813b2aed63b63058d3fd06709fa24e","kubernetes.io/config.mirror":"92813b2aed63b63058d3fd06709fa24e","kubernetes.io/config.seen":"2024-04-20T01:35:08.321899460Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-20T01:35:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0419 19:01:50.958186   14960 request.go:629] Waited for 195.1212ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 19:01:50.958432   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes/multinode-348000
	I0419 19:01:50.958506   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:50.958506   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:50.958530   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:50.966027   14960 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 19:01:50.966027   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:50.966027   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:50.966027   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:50.966027   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:50.966027   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:50 GMT
	I0419 19:01:50.966027   14960 round_trippers.go:580]     Audit-Id: 8fcc1a33-8891-4447-9ca2-2e5d82fc4890
	I0419 19:01:50.966027   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:50.966027   14960 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-20T01:35:05Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0419 19:01:50.967345   14960 pod_ready.go:92] pod "kube-scheduler-multinode-348000" in "kube-system" namespace has status "Ready":"True"
	I0419 19:01:50.967345   14960 pod_ready.go:81] duration metric: took 409.8114ms for pod "kube-scheduler-multinode-348000" in "kube-system" namespace to be "Ready" ...
	I0419 19:01:50.967345   14960 pod_ready.go:38] duration metric: took 1.6156507s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 19:01:50.967345   14960 system_svc.go:44] waiting for kubelet service to be running ....
	I0419 19:01:50.985273   14960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 19:01:51.012332   14960 system_svc.go:56] duration metric: took 44.9607ms WaitForService to wait for kubelet
	I0419 19:01:51.012332   14960 kubeadm.go:576] duration metric: took 9.9572433s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 19:01:51.012332   14960 node_conditions.go:102] verifying NodePressure condition ...
	I0419 19:01:51.146229   14960 request.go:629] Waited for 133.7259ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.42.24:8443/api/v1/nodes
	I0419 19:01:51.146549   14960 round_trippers.go:463] GET https://172.19.42.24:8443/api/v1/nodes
	I0419 19:01:51.146549   14960 round_trippers.go:469] Request Headers:
	I0419 19:01:51.146549   14960 round_trippers.go:473]     Accept: application/json, */*
	I0419 19:01:51.146549   14960 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0419 19:01:51.151158   14960 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 19:01:51.151158   14960 round_trippers.go:577] Response Headers:
	I0419 19:01:51.151158   14960 round_trippers.go:580]     Date: Sat, 20 Apr 2024 02:01:51 GMT
	I0419 19:01:51.151633   14960 round_trippers.go:580]     Audit-Id: 1fe3e0d5-02c4-4ea7-b6c2-3ea2d67236ac
	I0419 19:01:51.151633   14960 round_trippers.go:580]     Cache-Control: no-cache, private
	I0419 19:01:51.151633   14960 round_trippers.go:580]     Content-Type: application/json
	I0419 19:01:51.151633   14960 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 95f17ad0-a494-47d2-bd48-3c12b32bd1ba
	I0419 19:01:51.151633   14960 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f36cf30-1c64-4a0a-8815-bff59746308d
	I0419 19:01:51.152472   14960 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2140"},"items":[{"metadata":{"name":"multinode-348000","uid":"2105e54f-4918-4d85-a755-a4b9dd447750","resourceVersion":"1905","creationTimestamp":"2024-04-20T01:35:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-348000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"910ae0f62f2dcf448782075db183a042c84a625e","minikube.k8s.io/name":"multinode-348000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_19T18_35_09_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15604 chars]
	I0419 19:01:51.153327   14960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 19:01:51.153436   14960 node_conditions.go:123] node cpu capacity is 2
	I0419 19:01:51.153436   14960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 19:01:51.153436   14960 node_conditions.go:123] node cpu capacity is 2
	I0419 19:01:51.153436   14960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 19:01:51.153436   14960 node_conditions.go:123] node cpu capacity is 2
	I0419 19:01:51.153436   14960 node_conditions.go:105] duration metric: took 141.1038ms to run NodePressure ...
	I0419 19:01:51.153436   14960 start.go:240] waiting for startup goroutines ...
	I0419 19:01:51.153542   14960 start.go:254] writing updated cluster config ...
	I0419 19:01:51.157851   14960 out.go:177] 
	I0419 19:01:51.160844   14960 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 19:01:51.169814   14960 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 19:01:51.169814   14960 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 19:01:51.175642   14960 out.go:177] * Starting "multinode-348000-m03" worker node in "multinode-348000" cluster
	I0419 19:01:51.178973   14960 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 19:01:51.178973   14960 cache.go:56] Caching tarball of preloaded images
	I0419 19:01:51.179316   14960 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0419 19:01:51.179316   14960 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 19:01:51.179839   14960 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-348000\config.json ...
	I0419 19:01:51.188191   14960 start.go:360] acquireMachinesLock for multinode-348000-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 19:01:51.188191   14960 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-348000-m03"
	I0419 19:01:51.188191   14960 start.go:96] Skipping create...Using existing machine configuration
	I0419 19:01:51.188191   14960 fix.go:54] fixHost starting: m03
	I0419 19:01:51.188913   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m03 ).state
	I0419 19:01:53.263702   14960 main.go:141] libmachine: [stdout =====>] : Off
	
	I0419 19:01:53.264538   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:53.264538   14960 fix.go:112] recreateIfNeeded on multinode-348000-m03: state=Stopped err=<nil>
	W0419 19:01:53.264538   14960 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 19:01:53.267585   14960 out.go:177] * Restarting existing hyperv VM for "multinode-348000-m03" ...
	I0419 19:01:53.270855   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-348000-m03
	I0419 19:01:56.370046   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 19:01:56.370792   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:56.370792   14960 main.go:141] libmachine: Waiting for host to start...
	I0419 19:01:56.370792   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m03 ).state
	I0419 19:01:58.536828   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:01:58.536828   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:01:58.548256   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 19:02:01.029129   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 19:02:01.033553   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:02:02.037128   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m03 ).state
	I0419 19:02:04.119846   14960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 19:02:04.122288   14960 main.go:141] libmachine: [stderr =====>] : 
	I0419 19:02:04.122288   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 19:02:06.581337   14960 main.go:141] libmachine: [stdout =====>] : 
	I0419 19:02:06.581337   14960 main.go:141] libmachine: [stderr =====>] : 
	
	
	==> Docker <==
	Apr 20 01:59:09 multinode-348000 dockerd[1052]: 2024/04/20 01:59:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:12 multinode-348000 dockerd[1052]: 2024/04/20 01:59:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:13 multinode-348000 dockerd[1052]: 2024/04/20 01:59:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:16 multinode-348000 dockerd[1052]: 2024/04/20 01:59:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:16 multinode-348000 dockerd[1052]: 2024/04/20 01:59:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:16 multinode-348000 dockerd[1052]: 2024/04/20 01:59:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:16 multinode-348000 dockerd[1052]: 2024/04/20 01:59:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:16 multinode-348000 dockerd[1052]: 2024/04/20 01:59:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:17 multinode-348000 dockerd[1052]: 2024/04/20 01:59:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:17 multinode-348000 dockerd[1052]: 2024/04/20 01:59:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:17 multinode-348000 dockerd[1052]: 2024/04/20 01:59:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:17 multinode-348000 dockerd[1052]: 2024/04/20 01:59:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:17 multinode-348000 dockerd[1052]: 2024/04/20 01:59:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:17 multinode-348000 dockerd[1052]: 2024/04/20 01:59:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 20 01:59:17 multinode-348000 dockerd[1052]: 2024/04/20 01:59:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d608b74b0597f       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   75ff9f4e9dde2       busybox-fc5497c4f-xnz2k
	352cf21a3e202       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   f28a1e746a9b4       coredns-7db6d8ff4d-7w477
	c6f350bee7762       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       2                   5472c1fba3929       storage-provisioner
	ae0b21715f861       4950bb10b3f87                                                                                         3 minutes ago       Running             kindnet-cni               2                   b5a777eba295e       kindnet-s4fsr
	f8c798c994078       4950bb10b3f87                                                                                         4 minutes ago       Exited              kindnet-cni               1                   b5a777eba295e       kindnet-s4fsr
	45383c4290ad1       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       1                   5472c1fba3929       storage-provisioner
	e438af0f1ec9e       a0bf559e280cf                                                                                         4 minutes ago       Running             kube-proxy                1                   09f65a6953038       kube-proxy-kj76x
	2deabe4dbdf41       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      0                   ab9ff1d906880       etcd-multinode-348000
	bd3aa93bac25b       c42f13656d0b2                                                                                         4 minutes ago       Running             kube-apiserver            0                   d7052a6f04def       kube-apiserver-multinode-348000
	b67f2295d26ca       c7aad43836fa5                                                                                         4 minutes ago       Running             kube-controller-manager   1                   118cca57d1f54       kube-controller-manager-multinode-348000
	d57aee391c146       259c8277fcbbc                                                                                         4 minutes ago       Running             kube-scheduler            1                   e8baa597c1467       kube-scheduler-multinode-348000
	d8afb3e1fb946       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   476e3efb38684       busybox-fc5497c4f-xnz2k
	627b84abf45cd       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   2dd294415aae1       coredns-7db6d8ff4d-7w477
	a6586791413d0       a0bf559e280cf                                                                                         27 minutes ago      Exited              kube-proxy                0                   7935893e9f22a       kube-proxy-kj76x
	9638ddcd54285       c7aad43836fa5                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   6e420625b84be       kube-controller-manager-multinode-348000
	e476774b8f77e       259c8277fcbbc                                                                                         27 minutes ago      Exited              kube-scheduler            0                   e5d733991bf1a       kube-scheduler-multinode-348000
	
	
	==> coredns [352cf21a3e20] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 93714cfd58e203ac2baa48ea9c7b435951d2a9faed7a5c70b4e84c89c6c1fe4c1dfa41f14b3ebf0f5941dade673a82eaad960061e673dd78dcb856db3393b39d
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51206 - 14298 "HINFO IN 4972057462503628469.2167329557243878603. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028297062s
	
	
	==> coredns [627b84abf45c] <==
	[INFO] 10.244.0.3:35877 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000325701s
	[INFO] 10.244.0.3:53705 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000318601s
	[INFO] 10.244.0.3:40560 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164401s
	[INFO] 10.244.0.3:53239 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001239s
	[INFO] 10.244.0.3:39754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001464s
	[INFO] 10.244.0.3:41397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001668s
	[INFO] 10.244.0.3:49126 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001646s
	[INFO] 10.244.1.2:37850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115501s
	[INFO] 10.244.1.2:44063 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001443s
	[INFO] 10.244.1.2:39924 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000607s
	[INFO] 10.244.1.2:53244 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000622s
	[INFO] 10.244.0.3:52017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001879s
	[INFO] 10.244.0.3:55488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000814s
	[INFO] 10.244.0.3:57536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000778s
	[INFO] 10.244.0.3:45454 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001788s
	[INFO] 10.244.1.2:52247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001095s
	[INFO] 10.244.1.2:46954 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001143s
	[INFO] 10.244.1.2:47574 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098701s
	[INFO] 10.244.1.2:36658 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000170301s
	[INFO] 10.244.0.3:35421 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001002s
	[INFO] 10.244.0.3:41995 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132201s
	[INFO] 10.244.0.3:36431 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001956s
	[INFO] 10.244.0.3:38168 - 5 "PTR IN 1.32.19.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000222s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-348000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-348000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=multinode-348000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_19T18_35_09_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 01:35:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-348000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 02:02:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:35:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 01:58:40 +0000   Sat, 20 Apr 2024 01:58:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.42.24
	  Hostname:    multinode-348000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd21fc8af31a4161a4396c16b70a2fc3
	  System UUID:                fdc3fb6e-1818-9a4e-b496-b7ed0124a8e6
	  Boot ID:                    047b982b-9f97-4a1a-8f8a-a308f369753b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xnz2k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-7w477                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-348000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m34s
	  kube-system                 kindnet-s4fsr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-348000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-controller-manager-multinode-348000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-kj76x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-348000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 27m                    kube-proxy       
	  Normal  Starting                 4m32s                  kube-proxy       
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-348000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-348000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-348000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-348000 event: Registered Node multinode-348000 in Controller
	  Normal  NodeReady                26m                    kubelet          Node multinode-348000 status is now: NodeReady
	  Normal  Starting                 4m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m40s (x8 over 4m40s)  kubelet          Node multinode-348000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s (x8 over 4m40s)  kubelet          Node multinode-348000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s (x7 over 4m40s)  kubelet          Node multinode-348000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m22s                  node-controller  Node multinode-348000 event: Registered Node multinode-348000 in Controller
	
	
	Name:               multinode-348000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-348000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=multinode-348000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T19_01_40_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 02:01:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-348000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 02:02:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 02:01:48 +0000   Sat, 20 Apr 2024 02:01:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 02:01:48 +0000   Sat, 20 Apr 2024 02:01:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 02:01:48 +0000   Sat, 20 Apr 2024 02:01:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 02:01:48 +0000   Sat, 20 Apr 2024 02:01:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.47.34
	  Hostname:    multinode-348000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 76385953d6f14c2cb30200480a2cca6a
	  System UUID:                9f7972f9-8942-ef4f-b0cf-029b405f5832
	  Boot ID:                    d90398e6-85d9-4f91-92ec-6bf748903c5c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qnklj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kindnet-s98rh              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-bjv9b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24m                kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-348000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-348000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-348000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                23m                kubelet          Node multinode-348000-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  55s (x2 over 56s)  kubelet          Node multinode-348000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x2 over 56s)  kubelet          Node multinode-348000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x2 over 56s)  kubelet          Node multinode-348000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  55s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           52s                node-controller  Node multinode-348000-m02 event: Registered Node multinode-348000-m02 in Controller
	  Normal  NodeReady                47s                kubelet          Node multinode-348000-m02 status is now: NodeReady
	
	
	Name:               multinode-348000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-348000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=multinode-348000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T18_53_29_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 01:53:28 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-348000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 01:54:29 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 20 Apr 2024 01:53:36 +0000   Sat, 20 Apr 2024 01:55:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.19.37.59
	  Hostname:    multinode-348000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 02e45e9bf03f4852a443a43ac6a8538b
	  System UUID:                37a43d59-2157-6e44-8d13-6c975ea12fea
	  Boot ID:                    404bc64b-d4fc-4c63-a589-8191649bdfaa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-mg8qs       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-2jjsq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 9m3s                 kube-proxy       
	  Normal  Starting                 19m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)    kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)    kubelet          Node multinode-348000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)    kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                  kubelet          Node multinode-348000-m03 status is now: NodeReady
	  Normal  Starting                 9m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m7s (x2 over 9m7s)  kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m7s (x2 over 9m7s)  kubelet          Node multinode-348000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m7s (x2 over 9m7s)  kubelet          Node multinode-348000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m3s                 node-controller  Node multinode-348000-m03 event: Registered Node multinode-348000-m03 in Controller
	  Normal  NodeReady                8m59s                kubelet          Node multinode-348000-m03 status is now: NodeReady
	  Normal  NodeNotReady             7m22s                node-controller  Node multinode-348000-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           4m22s                node-controller  Node multinode-348000-m03 event: Registered Node multinode-348000-m03 in Controller
	
	
	==> dmesg <==
	              * this clock source is slow. Consider trying other clock sources
	[  +5.461945] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.733998] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.817887] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.031305] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr20 01:57] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.209815] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[ +26.622359] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	[  +0.115734] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.605928] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	[  +0.209234] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	[  +0.243987] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	[  +2.954231] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	[  +0.209781] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	[  +0.225214] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	[  +0.313735] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	[  +0.929646] systemd-fstab-generator[1383]: Ignoring "noauto" option for root device
	[  +0.108494] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.650728] systemd-fstab-generator[1520]: Ignoring "noauto" option for root device
	[  +1.371725] kauditd_printk_skb: 49 callbacks suppressed
	[Apr20 01:58] kauditd_printk_skb: 25 callbacks suppressed
	[  +3.878920] systemd-fstab-generator[2324]: Ignoring "noauto" option for root device
	[  +7.552702] kauditd_printk_skb: 70 callbacks suppressed
	
	
	==> etcd [2deabe4dbdf4] <==
	{"level":"info","ts":"2024-04-20T01:57:57.260237Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T01:57:57.26046Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-20T01:57:57.264179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c switched to configuration voters=(5744930906065567852)"}
	{"level":"info","ts":"2024-04-20T01:57:57.264281Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","added-peer-id":"4fba18389b33806c","added-peer-peer-urls":["https://172.19.42.231:2380"]}
	{"level":"info","ts":"2024-04-20T01:57:57.264439Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dca2ede42d67bc1c","local-member-id":"4fba18389b33806c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:57:57.264612Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T01:57:57.271976Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-20T01:57:57.273753Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4fba18389b33806c","initial-advertise-peer-urls":["https://172.19.42.24:2380"],"listen-peer-urls":["https://172.19.42.24:2380"],"advertise-client-urls":["https://172.19.42.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.42.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-20T01:57:57.27526Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-20T01:57:57.27622Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.42.24:2380"}
	{"level":"info","ts":"2024-04-20T01:57:57.277207Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.42.24:2380"}
	{"level":"info","ts":"2024-04-20T01:57:58.988188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-20T01:57:58.988311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-20T01:57:58.988354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c received MsgPreVoteResp from 4fba18389b33806c at term 2"}
	{"level":"info","ts":"2024-04-20T01:57:58.988369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became candidate at term 3"}
	{"level":"info","ts":"2024-04-20T01:57:58.988376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c received MsgVoteResp from 4fba18389b33806c at term 3"}
	{"level":"info","ts":"2024-04-20T01:57:58.988386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4fba18389b33806c became leader at term 3"}
	{"level":"info","ts":"2024-04-20T01:57:58.988399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4fba18389b33806c elected leader 4fba18389b33806c at term 3"}
	{"level":"info","ts":"2024-04-20T01:57:58.994477Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4fba18389b33806c","local-member-attributes":"{Name:multinode-348000 ClientURLs:[https://172.19.42.24:2379]}","request-path":"/0/members/4fba18389b33806c/attributes","cluster-id":"dca2ede42d67bc1c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-20T01:57:58.994493Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:57:58.994512Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-20T01:57:58.996572Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-20T01:57:58.996617Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-20T01:57:58.999043Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.42.24:2379"}
	{"level":"info","ts":"2024-04-20T01:57:58.999341Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 02:02:35 up 6 min,  0 users,  load average: 0.41, 0.44, 0.21
	Linux multinode-348000 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ae0b21715f86] <==
	I0420 02:01:47.728816       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0420 02:01:57.735578       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0420 02:01:57.735676       1 main.go:227] handling current node
	I0420 02:01:57.735707       1 main.go:223] Handling node with IPs: map[172.19.47.34:{}]
	I0420 02:01:57.735715       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0420 02:01:57.736346       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0420 02:01:57.736449       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0420 02:02:07.752142       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0420 02:02:07.752187       1 main.go:227] handling current node
	I0420 02:02:07.752607       1 main.go:223] Handling node with IPs: map[172.19.47.34:{}]
	I0420 02:02:07.752625       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0420 02:02:07.752913       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0420 02:02:07.752944       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0420 02:02:17.759667       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0420 02:02:17.759781       1 main.go:227] handling current node
	I0420 02:02:17.760160       1 main.go:223] Handling node with IPs: map[172.19.47.34:{}]
	I0420 02:02:17.760197       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0420 02:02:17.760405       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0420 02:02:17.760517       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	I0420 02:02:27.772463       1 main.go:223] Handling node with IPs: map[172.19.42.24:{}]
	I0420 02:02:27.772588       1 main.go:227] handling current node
	I0420 02:02:27.772605       1 main.go:223] Handling node with IPs: map[172.19.47.34:{}]
	I0420 02:02:27.772613       1 main.go:250] Node multinode-348000-m02 has CIDR [10.244.1.0/24] 
	I0420 02:02:27.773247       1 main.go:223] Handling node with IPs: map[172.19.37.59:{}]
	I0420 02:02:27.773280       1 main.go:250] Node multinode-348000-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f8c798c99407] <==
	I0420 01:58:03.441751       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0420 01:58:03.511070       1 main.go:107] hostIP = 172.19.42.24
	podIP = 172.19.42.24
	I0420 01:58:03.513110       1 main.go:116] setting mtu 1500 for CNI 
	I0420 01:58:03.513147       1 main.go:146] kindnetd IP family: "ipv4"
	I0420 01:58:03.513182       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0420 01:58:07.011650       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0420 01:58:10.084231       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0420 01:58:13.156371       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0420 01:58:16.227521       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0420 01:58:19.299385       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [bd3aa93bac25] <==
	I0420 01:58:00.736531       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0420 01:58:00.737086       1 shared_informer.go:320] Caches are synced for configmaps
	I0420 01:58:00.737192       1 aggregator.go:165] initial CRD sync complete...
	I0420 01:58:00.737219       1 autoregister_controller.go:141] Starting autoregister controller
	I0420 01:58:00.737225       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0420 01:58:00.737230       1 cache.go:39] Caches are synced for autoregister controller
	I0420 01:58:00.740699       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0420 01:58:00.741004       1 policy_source.go:224] refreshing policies
	I0420 01:58:00.742672       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0420 01:58:00.747054       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0420 01:58:00.805770       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0420 01:58:00.807460       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0420 01:58:00.814456       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0420 01:58:00.814490       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0420 01:58:00.815844       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0420 01:58:01.612010       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0420 01:58:02.160618       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.42.231 172.19.42.24]
	I0420 01:58:02.163332       1 controller.go:615] quota admission added evaluator for: endpoints
	I0420 01:58:02.176968       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0420 01:58:03.430204       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0420 01:58:03.761410       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0420 01:58:03.780335       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0420 01:58:03.907022       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0420 01:58:03.924019       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0420 01:58:22.143512       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.42.24]
	
	
	==> kube-controller-manager [9638ddcd5428] <==
	I0420 01:35:39.265403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.862669ms"
	I0420 01:35:39.266023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="552.786µs"
	I0420 01:38:18.575680       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m02\" does not exist"
	I0420 01:38:18.590900       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m02" podCIDRs=["10.244.1.0/24"]
	I0420 01:38:22.613051       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m02"
	I0420 01:38:37.669535       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0420 01:39:03.031296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.090021ms"
	I0420 01:39:03.053897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.363721ms"
	I0420 01:39:03.054543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.499µs"
	I0420 01:39:05.783927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.434034ms"
	I0420 01:39:05.784276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="108.901µs"
	I0420 01:39:07.103598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.163039ms"
	I0420 01:39:07.104054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.4µs"
	I0420 01:42:52.390190       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0420 01:42:52.390530       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0420 01:42:52.403944       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m03" podCIDRs=["10.244.2.0/24"]
	I0420 01:42:52.676079       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-348000-m03"
	I0420 01:43:11.211743       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0420 01:50:42.818871       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0420 01:53:22.621370       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0420 01:53:28.752017       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m03\" does not exist"
	I0420 01:53:28.753300       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0420 01:53:28.789161       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m03" podCIDRs=["10.244.3.0/24"]
	I0420 01:53:36.097701       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m03"
	I0420 01:55:13.205537       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	
	
	==> kube-controller-manager [b67f2295d26c] <==
	I0420 01:58:13.878534       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0420 01:58:40.290168       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0420 01:58:53.395955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.694507ms"
	I0420 01:58:53.396146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.003µs"
	I0420 01:59:07.033370       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.713655ms"
	I0420 01:59:07.033533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.092µs"
	I0420 01:59:07.047220       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.391µs"
	I0420 01:59:07.121391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.338984ms"
	I0420 01:59:07.121503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.691µs"
	I0420 02:01:24.988778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.91234ms"
	I0420 02:01:24.990553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="1.649298ms"
	I0420 02:01:25.017701       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.100076ms"
	I0420 02:01:25.018394       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.1µs"
	I0420 02:01:40.067314       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-348000-m02\" does not exist"
	I0420 02:01:40.083483       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-348000-m02" podCIDRs=["10.244.1.0/24"]
	I0420 02:01:41.947480       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.2µs"
	I0420 02:01:48.970715       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-348000-m02"
	I0420 02:01:49.012061       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.901µs"
	I0420 02:01:53.061740       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.1µs"
	I0420 02:01:53.077568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.001µs"
	I0420 02:01:53.107139       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.8µs"
	I0420 02:01:53.189553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.201µs"
	I0420 02:01:53.207698       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.401µs"
	I0420 02:01:54.220435       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.073702ms"
	I0420 02:01:54.222127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.5µs"
	
	
	==> kube-proxy [a6586791413d] <==
	I0420 01:35:26.120497       1 server_linux.go:69] "Using iptables proxy"
	I0420 01:35:26.156956       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.42.231"]
	I0420 01:35:26.208282       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 01:35:26.208472       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 01:35:26.208501       1 server_linux.go:165] "Using iptables Proxier"
	I0420 01:35:26.214693       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 01:35:26.216114       1 server.go:872] "Version info" version="v1.30.0"
	I0420 01:35:26.216181       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:35:26.219192       1 config.go:192] "Starting service config controller"
	I0420 01:35:26.219810       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 01:35:26.220079       1 config.go:101] "Starting endpoint slice config controller"
	I0420 01:35:26.220093       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 01:35:26.221802       1 config.go:319] "Starting node config controller"
	I0420 01:35:26.221980       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 01:35:26.320313       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 01:35:26.320380       1 shared_informer.go:320] Caches are synced for service config
	I0420 01:35:26.322323       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e438af0f1ec9] <==
	I0420 01:58:03.129201       1 server_linux.go:69] "Using iptables proxy"
	I0420 01:58:03.201631       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.42.24"]
	I0420 01:58:03.344058       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0420 01:58:03.344107       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0420 01:58:03.344137       1 server_linux.go:165] "Using iptables Proxier"
	I0420 01:58:03.353394       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 01:58:03.354462       1 server.go:872] "Version info" version="v1.30.0"
	I0420 01:58:03.354693       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:58:03.358325       1 config.go:192] "Starting service config controller"
	I0420 01:58:03.358366       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 01:58:03.358985       1 config.go:101] "Starting endpoint slice config controller"
	I0420 01:58:03.359176       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 01:58:03.358997       1 config.go:319] "Starting node config controller"
	I0420 01:58:03.368409       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 01:58:03.459372       1 shared_informer.go:320] Caches are synced for service config
	I0420 01:58:03.459745       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 01:58:03.470538       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d57aee391c14] <==
	I0420 01:57:58.020728       1 serving.go:380] Generated self-signed cert in-memory
	I0420 01:58:00.771749       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0420 01:58:00.771906       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:58:00.785599       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0420 01:58:00.785824       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0420 01:58:00.785929       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0420 01:58:00.785956       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0420 01:58:00.785972       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0420 01:58:00.786046       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0420 01:58:00.786323       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0420 01:58:00.786915       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0420 01:58:00.887091       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0420 01:58:00.887476       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0420 01:58:00.888293       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kube-scheduler [e476774b8f77] <==
	W0420 01:35:06.310265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0420 01:35:06.311126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0420 01:35:06.333128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0420 01:35:06.333531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0420 01:35:06.355993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0420 01:35:06.356053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0420 01:35:06.356154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0420 01:35:06.356365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0420 01:35:06.490128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 01:35:06.490240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 01:35:06.496247       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 01:35:06.496709       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 01:35:06.552817       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 01:35:06.552917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 01:35:06.607496       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 01:35:06.607914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 01:35:06.608255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0420 01:35:06.608488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0420 01:35:06.623642       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0420 01:35:06.624029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0420 01:35:09.746203       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0420 01:55:30.893306       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0420 01:55:30.893359       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0420 01:55:30.893732       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0420 01:55:30.894682       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 20 01:58:45 multinode-348000 kubelet[1526]: I0420 01:58:45.169759    1526 scope.go:117] "RemoveContainer" containerID="45383c4290ad1b9121fa9a9844eb6b8c813fa0a702d725dcc624b2c5e0936702"
	Apr 20 01:58:55 multinode-348000 kubelet[1526]: I0420 01:58:55.162183    1526 scope.go:117] "RemoveContainer" containerID="490377504e57c3189163833390967e79bb80d222691d4402677feb6f25ed22f4"
	Apr 20 01:58:55 multinode-348000 kubelet[1526]: I0420 01:58:55.206283    1526 scope.go:117] "RemoveContainer" containerID="53f6a00490766be2eb687e6fff052ca7a46ae16a0baf4551e956c81550d673b2"
	Apr 20 01:58:55 multinode-348000 kubelet[1526]: E0420 01:58:55.212558    1526 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:58:55 multinode-348000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:58:55 multinode-348000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:58:55 multinode-348000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:58:55 multinode-348000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 01:59:05 multinode-348000 kubelet[1526]: I0420 01:59:05.918992    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75ff9f4e9dde29a997e4321dd3659a2ce7d479a75826a78c4d3525f1eb5f696f"
	Apr 20 01:59:05 multinode-348000 kubelet[1526]: I0420 01:59:05.948376    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f28a1e746a9b438367a8e05d2e1a085afb4abec4174f7a7eb80549e02b95047a"
	Apr 20 01:59:55 multinode-348000 kubelet[1526]: E0420 01:59:55.210479    1526 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 01:59:55 multinode-348000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 01:59:55 multinode-348000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 01:59:55 multinode-348000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 01:59:55 multinode-348000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 02:00:55 multinode-348000 kubelet[1526]: E0420 02:00:55.210208    1526 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 02:00:55 multinode-348000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 02:00:55 multinode-348000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 02:00:55 multinode-348000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 02:00:55 multinode-348000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 20 02:01:55 multinode-348000 kubelet[1526]: E0420 02:01:55.209179    1526 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 20 02:01:55 multinode-348000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 20 02:01:55 multinode-348000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 20 02:01:55 multinode-348000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 20 02:01:55 multinode-348000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0419 19:02:25.887404    8032 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-348000 -n multinode-348000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-348000 -n multinode-348000: (11.618542s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-348000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (521.28s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (312.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-732500 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-732500 --driver=hyperv: exit status 1 (4m59.7656611s)

                                                
                                                
-- stdout --
	* [NoKubernetes-732500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-732500" primary control-plane node in "NoKubernetes-732500" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0419 19:19:23.478471    9136 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-732500 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-732500 -n NoKubernetes-732500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-732500 -n NoKubernetes-732500: exit status 6 (12.9241832s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0419 19:24:23.199989    9528 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0419 19:24:35.952861    9528 status.go:417] kubeconfig endpoint: get endpoint: "NoKubernetes-732500" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-732500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (312.70s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (10800.519s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-377700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=hyperv
panic: test timed out after 3h0m0s
running tests:
	TestNetworkPlugins (34m41s)
	TestNetworkPlugins/group/auto (10m54s)
	TestNetworkPlugins/group/calico (5m57s)
	TestNetworkPlugins/group/calico/Start (5m57s)
	TestNetworkPlugins/group/custom-flannel (31s)
	TestNetworkPlugins/group/custom-flannel/Start (31s)
	TestNetworkPlugins/group/kindnet (9m22s)
	TestStartStop (33m35s)

                                                
                                                
goroutine 2121 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 10 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0004f0680, 0xc00207dbb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc00080e0c0, {0x4d5c540, 0x2a, 0x2a}, {0x2a2835c?, 0x86806f?, 0x4d7f760?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000adbae0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000adbae0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000544d80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 396 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x39b5c00, 0xc000054300}, 0xc00022df50, 0xc00022df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x39b5c00, 0xc000054300}, 0x60?, 0xc00022df50, 0xc00022df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x39b5c00?, 0xc000054300?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x93e3a5?, 0xc0008f6f20?, 0xc00053c360?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 344
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 397 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 396
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 99 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 98
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

                                                
                                                
goroutine 1969 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x39b5c00, 0xc000054300}, 0xc000a21f50, 0xc000a21f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x39b5c00, 0xc000054300}, 0x0?, 0xc000a21f50, 0xc000a21f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x39b5c00?, 0xc000054300?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x93e3a5?, 0xc00226a840?, 0xc000a0db00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2023
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 1951 [chan receive, 4 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000af5300, 0xc000054300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1946
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 395 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc002380350, 0x37)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x24c4b80?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002088600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002380380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002198020, {0x3992200, 0xc002192930}, 0x1, 0xc000054300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002198020, 0x3b9aca00, 0x0, 0x1, 0xc000054300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 344
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 1576 [chan receive, 35 minutes]:
testing.(*T).Run(0xc0027221a0, {0x29cc851?, 0x81f48d?}, 0xc00244c018)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0027221a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0027221a0, 0x343aeb8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1784 [chan receive, 35 minutes]:
testing.(*testContext).waitParallel(0xc000b86500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002190680)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002190680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002190680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc002190680, 0xc000af5400)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1783
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 246 [IO wait, 169 minutes]:
internal/poll.runtime_pollWait(0x1f861ff7c20, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00020d408?, 0x0?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0009ec7a0, 0xc0009afbb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc0009ec788, 0x378, {0xc0002490e0?, 0x0?, 0x0?}, 0xc00020d008?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc0009ec788, 0xc0009afd90)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc0009ec788)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc0008066a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0008066a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc000a180f0, {0x39a8ca0, 0xc0008066a0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc000a180f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0004f1040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 243
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 1671 [syscall, locked to thread]:
syscall.SyscallN(0x7ffd92424de0?, {0xc002093108?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x0?, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x694, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000573f20)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0008f7340)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0008f7340)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
os/exec.(*Cmd).CombinedOutput(0xc0008f7340)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:1012 +0x85
k8s.io/minikube/test/integration.debugLogs(0xc002722820, {0xc002470260, 0xb})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:594 +0x9de5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002722820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:211 +0xbcc
testing.tRunner(0xc002722820, 0xc0005dcf00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1670
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1950 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0020b3800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 1946
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 1904 [syscall, 6 minutes, locked to thread]:
syscall.SyscallN(0x7ffd92424de0?, {0xc000a7bbd0?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x3f4, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc002be4660)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0013e0b00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0013e0b00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002392680, 0xc0013e0b00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc002392680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc002392680, 0xc0021e0150)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1679
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1785 [chan receive, 35 minutes]:
testing.(*testContext).waitParallel(0xc000b86500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002190820)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002190820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002190820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc002190820, 0xc000af5440)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1783
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 765 [chan send, 152 minutes]:
os/exec.(*Cmd).watchCtx(0xc00254cdc0, 0xc00053db60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 329
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 1787 [chan receive, 35 minutes]:
testing.(*testContext).waitParallel(0xc000b86500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002190b60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002190b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002190b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc002190b60, 0xc000af54c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1783
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 701 [chan send, 156 minutes]:
os/exec.(*Cmd).watchCtx(0xc0013e1600, 0xc000a0d800)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 700
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 343 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002088720)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 360
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 344 [chan receive, 159 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002380380, 0xc000054300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 360
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 1783 [chan receive, 35 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc002190340, 0x343b0d8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1613
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2116 [select]:
os/exec.(*Cmd).watchCtx(0xc002728000, 0xc000a0c180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2081
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2022 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002695920)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 1967
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 1672 [chan receive, 35 minutes]:
testing.(*testContext).waitParallel(0xc000b86500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0027229c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0027229c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0027229c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0027229c0, 0xc0005dcf80)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1670
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1931 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000af48d0, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x24c4b80?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0020b36e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000af5300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0024368d0, {0x3992200, 0xc0023a0000}, 0x1, 0xc000054300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0024368d0, 0x3b9aca00, 0x0, 0x1, 0xc000054300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1951
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 1670 [chan receive, 35 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc002722340, 0xc00244c018)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1576
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1955 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc0013e0b00, 0xc0023e8180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1904
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 1954 [syscall, locked to thread]:
syscall.SyscallN(0x1f861d482c0?, {0xc000b77b20?, 0x7c7ea5?, 0x8?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x1f861d482c0?, 0xc000b77b80?, 0x7bfdd6?, 0x4e0cbc0?, 0xc000b77c08?, 0x7b2985?, 0x0?, 0x10000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x704, {0xc0020cebc6?, 0x543a, 0x86417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002485b88?, {0xc0020cebc6?, 0x7ec1be?, 0x10000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002485b88, {0xc0020cebc6, 0x543a, 0x543a})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000a061e0, {0xc0020cebc6?, 0x1f85c82da88?, 0x7e9e?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0021e0450, {0x3990dc0, 0xc0000a6490})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3990f00, 0xc0021e0450}, {0x3990dc0, 0xc0000a6490}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc000b77e78?, {0x3990f00, 0xc0021e0450})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4d10840?, {0x3990f00?, 0xc0021e0450?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3990f00, 0xc0021e0450}, {0x3990e80, 0xc000a061e0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0023e84e0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1904
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 1613 [chan receive, 35 minutes]:
testing.(*T).Run(0xc002722b60, {0x29cc851?, 0x8f7333?}, 0x343b0d8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc002722b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc002722b60, 0x343af00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1788 [chan receive, 35 minutes]:
testing.(*testContext).waitParallel(0xc000b86500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002190d00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002190d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002190d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc002190d00, 0xc000af5500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1783
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2115 [syscall, locked to thread]:
syscall.SyscallN(0x1f862005f00?, {0xc0009b1b20?, 0x7c7ea5?, 0x4e0cbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x1f862005f59?, 0xc0009b1b80?, 0x7bfdd6?, 0x4e0cbc0?, 0xc0009b1c08?, 0x7b2985?, 0x1f85c820a28?, 0xc002987f67?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x764, {0xc002873cfc?, 0x304, 0x86417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc00277ea08?, {0xc002873cfc?, 0x7ec1be?, 0x2000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00277ea08, {0xc002873cfc, 0x304, 0x304})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6728, {0xc002873cfc?, 0xc0029536c0?, 0xe0a?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00277a0f0, {0x3990dc0, 0xc000a061d0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3990f00, 0xc00277a0f0}, {0x3990dc0, 0xc000a061d0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc0009b1e78?, {0x3990f00, 0xc00277a0f0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4d10840?, {0x3990f00?, 0xc00277a0f0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3990f00, 0xc00277a0f0}, {0x3990e80, 0xc0000a6728}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002402180?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2081
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 1905 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x82ddc8?, {0xc0009f5b20?, 0x7c7ea5?, 0x4e0cbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x1f861ed3798?, 0xc0009f5b80?, 0x7bfdd6?, 0x4e0cbc0?, 0xc0009f5c08?, 0x7b281b?, 0x1f85c820a28?, 0x20035?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x640, {0xc000b6ea2b?, 0x5d5, 0x86417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002485408?, {0xc000b6ea2b?, 0x0?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002485408, {0xc000b6ea2b, 0x5d5, 0x5d5})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000a061c8, {0xc000b6ea2b?, 0x1f861e67430?, 0x22a?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0021e0360, {0x3990dc0, 0xc000817a78})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3990f00, 0xc0021e0360}, {0x3990dc0, 0xc000817a78}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3990f00, 0xc0021e0360})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4d10840?, {0x3990f00?, 0xc0021e0360?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3990f00, 0xc0021e0360}, {0x3990e80, 0xc000a061c8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0022fcae0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1904
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 1968 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000ad6890, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x24c4b80?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0026957a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000ad68c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007ccc10, {0x3992200, 0xc002332090}, 0x1, 0xc000054300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007ccc10, 0x3b9aca00, 0x0, 0x1, 0xc000054300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2023
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2012 [IO wait]:
internal/poll.runtime_pollWait(0x1f861ff7a30, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xd7b075e4e1a780f7?, 0xae33b1ccd70352a5?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0022f1920, 0x343bab0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).Read(0xc0022f1908, {0xc0021be000, 0x3500, 0x3500})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:436 +0x2b1
net.(*netFD).Read(0xc0022f1908, {0xc0021be000?, 0xc0009f1878?, 0x7c7ea5?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0000a7400, {0xc0021be000?, 0x7b201e?, 0x1f861e676b0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc00080e6f0, {0xc0021be000?, 0x0?, 0xc00080e6f0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0006769b0, {0x3992960, 0xc00080e6f0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc000676708, {0x1f8620c5360, 0xc00080e978}, 0xc0009f1980?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc000676708, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc000676708, {0xc0021ae000, 0x1000, 0x7e7a49?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc002352c00, {0xc00218a200, 0x9, 0xc0009f1d18?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3990fa0, 0xc002352c00}, {0xc00218a200, 0x9, 0x9}, 0x9)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:335 +0x90
io.ReadFull(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc00218a200, 0x9, 0xd50345?}, {0x3990fa0?, 0xc002352c00?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc00218a1c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/frame.go:498 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0009f1fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:2429 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0001f8300)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:2325 +0x65
created by golang.org/x/net/http2.(*ClientConn).goRun in goroutine 2011
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:369 +0x2d

goroutine 2023 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000ad68c0, 0xc000054300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1967
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 1789 [chan receive, 35 minutes]:
testing.(*testContext).waitParallel(0xc000b86500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002190ea0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002190ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002190ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc002190ea0, 0xc000af5580)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1783
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1673 [chan receive, 35 minutes]:
testing.(*testContext).waitParallel(0xc000b86500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002722d00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002722d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002722d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002722d00, 0xc0005dd080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1670
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2106 [syscall, locked to thread]:
syscall.SyscallN(0x1f862029e20?, {0xc000b69b20?, 0x7c7ea5?, 0x4?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x6d?, 0xc000b69b80?, 0x7bfdd6?, 0x4e0cbc0?, 0xc000b69c08?, 0x7b281b?, 0x7a8ba6?, 0x8000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x714, {0xc0027ff93a?, 0x2c6, 0xc0027ff800?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0009eca08?, {0xc0027ff93a?, 0x7e5170?, 0x400?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0009eca08, {0xc0027ff93a, 0x2c6, 0x2c6})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0006242c0, {0xc0027ff93a?, 0xc000b69d98?, 0x13a?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002317ad0, {0x3990dc0, 0xc000a062b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3990f00, 0xc002317ad0}, {0x3990dc0, 0xc000a062b0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3990f00, 0xc002317ad0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4d10840?, {0x3990f00?, 0xc002317ad0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3990f00, 0xc002317ad0}, {0x3990e80, 0xc0006242c0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000b69fa8?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1671
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 1933 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1932
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 1674 [chan receive, 35 minutes]:
testing.(*testContext).waitParallel(0xc000b86500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002723380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002723380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002723380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002723380, 0xc0005dd100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1670
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1675 [chan receive, 35 minutes]:
testing.(*testContext).waitParallel(0xc000b86500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002723520)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002723520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002723520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002723520, 0xc0005dd180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1670
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1676 [syscall, locked to thread]:
syscall.SyscallN(0x7ffd92424de0?, {0xc002377108?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x0?, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x41c, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc002435e90)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00278e580)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00278e580)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
os/exec.(*Cmd).CombinedOutput(0xc00278e580)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:1012 +0x85
k8s.io/minikube/test/integration.debugLogs(0xc0027236c0, {0xc002470610, 0xe})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:418 +0x3fe5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0027236c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:211 +0xbcc
testing.tRunner(0xc0027236c0, 0xc0005dd200)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1670
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1677 [chan receive, 35 minutes]:
testing.(*testContext).waitParallel(0xc000b86500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002723860)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002723860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002723860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002723860, 0xc0005dd280)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1670
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1678 [chan receive]:
testing.(*T).Run(0xc002723a00, {0x29cc856?, 0x398ad88?}, 0xc00277a000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002723a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc002723a00, 0xc0005dd300)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1670
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1679 [chan receive, 6 minutes]:
testing.(*T).Run(0xc002723ba0, {0x29cc856?, 0x398ad88?}, 0xc0021e0150)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002723ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc002723ba0, 0xc0005dd380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1670
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2114 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc002611b20?, 0x7c7ea5?, 0x4e0cbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc002611b4d?, 0xc002611b80?, 0x7bfdd6?, 0x4e0cbc0?, 0xc002611c08?, 0x7b2985?, 0x1f85c820a28?, 0x4d?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x73c, {0xc000b6e204?, 0x5fc, 0xc000b6e000?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc00277e288?, {0xc000b6e204?, 0x7ec171?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00277e288, {0xc000b6e204, 0x5fc, 0x5fc})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0000a64f0, {0xc000b6e204?, 0xc002611d98?, 0x204?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00277a0c0, {0x3990dc0, 0xc000624020})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3990f00, 0xc00277a0c0}, {0x3990dc0, 0xc000624020}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3990f00, 0xc00277a0c0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4d10840?, {0x3990f00?, 0xc00277a0c0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3990f00, 0xc00277a0c0}, {0x3990e80, 0xc0000a64f0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2081
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 1786 [chan receive, 35 minutes]:
testing.(*testContext).waitParallel(0xc000b86500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0021909c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0021909c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0021909c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0021909c0, 0xc000af5480)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1783
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2135 [syscall, locked to thread]:
syscall.SyscallN(0x1f861d47aa0?, {0xc0022a5b20?, 0x7c7ea5?, 0x4e0cbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x1f861d47a59?, 0xc0022a5b80?, 0x7bfdd6?, 0x4e0cbc0?, 0xc0022a5c08?, 0x7b281b?, 0x7a8ba6?, 0x35?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x798, {0xc0020ad53a?, 0x2c6, 0xc0020ad400?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0021ff408?, {0xc0020ad53a?, 0x7ec1be?, 0x400?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0021ff408, {0xc0020ad53a, 0x2c6, 0x2c6})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000a062c0, {0xc0020ad53a?, 0xc0022a5d98?, 0x13a?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0022e9ef0, {0x3990dc0, 0xc000624440})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3990f00, 0xc0022e9ef0}, {0x3990dc0, 0xc000624440}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3990f00, 0xc0022e9ef0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4d10840?, {0x3990f00?, 0xc0022e9ef0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3990f00, 0xc0022e9ef0}, {0x3990e80, 0xc000a062c0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0022fcc60?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1676
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 1932 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x39b5c00, 0xc000054300}, 0xc0009a5f50, 0xc0009a5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x39b5c00, 0xc000054300}, 0xa0?, 0xc0009a5f50, 0xc0009a5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x39b5c00?, 0xc000054300?}, 0x0?, 0x8f7c60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0009a5fd0?, 0x93e404?, 0x343ae10?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1951
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2081 [syscall, locked to thread]:
syscall.SyscallN(0x7ffd92424de0?, {0xc000a1fbd0?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x72c, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0028a4600)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002728000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc002728000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0004f0820, 0xc002728000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0004f0820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc0004f0820, 0xc00277a000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1678
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2034 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1969
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb


Test pass (151/195)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 19.67
4 TestDownloadOnly/v1.20.0/preload-exists 0.01
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.23
9 TestDownloadOnly/v1.20.0/DeleteAll 1.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.3
12 TestDownloadOnly/v1.30.0/json-events 11.11
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.21
18 TestDownloadOnly/v1.30.0/DeleteAll 1.18
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 1.41
21 TestBinaryMirror 7.31
22 TestOffline 287.8
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.22
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
28 TestCertOptions 487.95
29 TestCertExpiration 964.74
31 TestForceSystemdFlag 525.82
32 TestForceSystemdEnv 565.18
39 TestErrorSpam/start 16.58
40 TestErrorSpam/status 35.47
41 TestErrorSpam/pause 21.98
42 TestErrorSpam/unpause 22.16
43 TestErrorSpam/stop 54.59
46 TestFunctional/serial/CopySyncFile 0.04
47 TestFunctional/serial/StartWithProxy 208.27
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 125.76
50 TestFunctional/serial/KubeContext 0.14
51 TestFunctional/serial/KubectlGetPods 0.25
54 TestFunctional/serial/CacheCmd/cache/add_remote 25.35
55 TestFunctional/serial/CacheCmd/cache/add_local 10.76
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.18
57 TestFunctional/serial/CacheCmd/cache/list 0.19
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 8.99
59 TestFunctional/serial/CacheCmd/cache/cache_reload 35.28
60 TestFunctional/serial/CacheCmd/cache/delete 0.43
61 TestFunctional/serial/MinikubeKubectlCmd 0.49
63 TestFunctional/serial/ExtraConfig 127.8
64 TestFunctional/serial/ComponentHealth 0.19
65 TestFunctional/serial/LogsCmd 8.57
66 TestFunctional/serial/LogsFileCmd 10.72
67 TestFunctional/serial/InvalidService 21.09
73 TestFunctional/parallel/StatusCmd 42.47
77 TestFunctional/parallel/ServiceCmdConnect 34.72
78 TestFunctional/parallel/AddonsCmd 0.63
79 TestFunctional/parallel/PersistentVolumeClaim 43.75
81 TestFunctional/parallel/SSHCmd 21.47
82 TestFunctional/parallel/CpCmd 57.45
83 TestFunctional/parallel/MySQL 64.49
84 TestFunctional/parallel/FileSync 10.78
85 TestFunctional/parallel/CertSync 66.53
89 TestFunctional/parallel/NodeLabels 0.32
91 TestFunctional/parallel/NonActiveRuntimeDisabled 11.3
93 TestFunctional/parallel/License 3.14
94 TestFunctional/parallel/ServiceCmd/DeployApp 18.42
95 TestFunctional/parallel/ServiceCmd/List 13.28
96 TestFunctional/parallel/ServiceCmd/JSONOutput 12.77
97 TestFunctional/parallel/Version/short 0.24
98 TestFunctional/parallel/Version/components 9.56
99 TestFunctional/parallel/ImageCommands/ImageListShort 7.51
100 TestFunctional/parallel/ImageCommands/ImageListTable 7.37
101 TestFunctional/parallel/ImageCommands/ImageListJson 7.57
102 TestFunctional/parallel/ImageCommands/ImageListYaml 7.47
103 TestFunctional/parallel/ImageCommands/ImageBuild 26.9
104 TestFunctional/parallel/ImageCommands/Setup 4.67
106 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 26.33
107 TestFunctional/parallel/DockerEnv/powershell 48.65
110 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 22.03
111 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 29.17
112 TestFunctional/parallel/UpdateContextCmd/no_changes 2.41
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.38
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.44
115 TestFunctional/parallel/ImageCommands/ImageSaveToFile 10
116 TestFunctional/parallel/ProfileCmd/profile_not_create 11.75
117 TestFunctional/parallel/ImageCommands/ImageRemove 17.85
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 9.33
120 TestFunctional/parallel/ProfileCmd/profile_list 12.42
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.7
124 TestFunctional/parallel/ProfileCmd/profile_json_output 11.54
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 20.85
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 11.49
133 TestFunctional/delete_addon-resizer_images 0.5
134 TestFunctional/delete_my-image_image 0.2
135 TestFunctional/delete_minikube_cached_images 0.2
139 TestMultiControlPlane/serial/StartCluster 687.92
140 TestMultiControlPlane/serial/DeployApp 11.95
142 TestMultiControlPlane/serial/AddWorkerNode 249.72
143 TestMultiControlPlane/serial/NodeLabels 0.19
144 TestMultiControlPlane/serial/HAppyAfterClusterStart 27.34
145 TestMultiControlPlane/serial/CopyFile 609.77
146 TestMultiControlPlane/serial/StopSecondaryNode 71.87
147 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 20.58
151 TestImageBuild/serial/Setup 196.13
152 TestImageBuild/serial/NormalBuild 9.74
153 TestImageBuild/serial/BuildWithBuildArg 8.69
154 TestImageBuild/serial/BuildWithDockerIgnore 7.61
155 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.43
159 TestJSONOutput/start/Command 236.17
160 TestJSONOutput/start/Audit 0
162 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/pause/Command 7.76
166 TestJSONOutput/pause/Audit 0
168 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/unpause/Command 7.48
172 TestJSONOutput/unpause/Audit 0
174 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/stop/Command 33.59
178 TestJSONOutput/stop/Audit 0
180 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
182 TestErrorJSONOutput 1.38
187 TestMainNoArgs 0.17
188 TestMinikubeProfile 518.7
191 TestMountStart/serial/StartWithMountFirst 150.48
192 TestMountStart/serial/VerifyMountFirst 9.17
193 TestMountStart/serial/StartWithMountSecond 150.38
194 TestMountStart/serial/VerifyMountSecond 9.04
195 TestMountStart/serial/DeleteFirst 26.5
196 TestMountStart/serial/VerifyMountPostDelete 9
197 TestMountStart/serial/Stop 28.87
201 TestMultiNode/serial/FreshStart2Nodes 414.54
202 TestMultiNode/serial/DeployApp2Nodes 9.58
204 TestMultiNode/serial/AddNode 221.49
205 TestMultiNode/serial/MultiNodeLabels 0.21
206 TestMultiNode/serial/ProfileList 11.97
207 TestMultiNode/serial/CopyFile 355.13
208 TestMultiNode/serial/StopNode 75
209 TestMultiNode/serial/StartAfterStop 182.27
214 TestPreload 513.01
215 TestScheduledStopWindows 325.56
220 TestRunningBinaryUpgrade 1201.54
222 TestKubernetesUpgrade 1229.53
225 TestNoKubernetes/serial/StartNoK8sWithVersion 0.28
246 TestPause/serial/Start 403.75
247 TestStoppedBinaryUpgrade/Setup 0.83
248 TestStoppedBinaryUpgrade/Upgrade 752.22
249 TestPause/serial/SecondStartNoReconfiguration 332.31
250 TestPause/serial/Pause 7.85
251 TestPause/serial/VerifyStatus 11.78
252 TestPause/serial/Unpause 7.49
253 TestPause/serial/PauseAgain 7.64
254 TestPause/serial/DeletePaused 44.79
255 TestPause/serial/VerifyDeletedResources 24.46
258 TestStoppedBinaryUpgrade/MinikubeLogs 9.15
TestDownloadOnly/v1.20.0/json-events (19.67s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-395100 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-395100 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (19.6697052s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (19.67s)

TestDownloadOnly/v1.20.0/preload-exists (0.01s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.01s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-395100
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-395100: exit status 85 (233.0247ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-395100 | minikube1\jenkins | v1.33.0 | 19 Apr 24 16:58 PDT |          |
	|         | -p download-only-395100        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 16:58:51
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 16:58:51.704704   12264 out.go:291] Setting OutFile to fd 672 ...
	I0419 16:58:51.705061   12264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 16:58:51.705061   12264 out.go:304] Setting ErrFile to fd 676...
	I0419 16:58:51.705061   12264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0419 16:58:51.721201   12264 root.go:314] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0419 16:58:51.732763   12264 out.go:298] Setting JSON to true
	I0419 16:58:51.736324   12264 start.go:129] hostinfo: {"hostname":"minikube1","uptime":9590,"bootTime":1713561541,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0419 16:58:51.736324   12264 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 16:58:51.745144   12264 out.go:97] [download-only-395100] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0419 16:58:51.748134   12264 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	W0419 16:58:51.745495   12264 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0419 16:58:51.745495   12264 notify.go:220] Checking for updates...
	I0419 16:58:51.755720   12264 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0419 16:58:51.758953   12264 out.go:169] MINIKUBE_LOCATION=18703
	I0419 16:58:51.761790   12264 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0419 16:58:51.767018   12264 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0419 16:58:51.767816   12264 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 16:58:57.111274   12264 out.go:97] Using the hyperv driver based on user configuration
	I0419 16:58:57.111274   12264 start.go:297] selected driver: hyperv
	I0419 16:58:57.111274   12264 start.go:901] validating driver "hyperv" against <nil>
	I0419 16:58:57.111274   12264 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 16:58:57.168094   12264 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0419 16:58:57.169341   12264 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0419 16:58:57.169596   12264 cni.go:84] Creating CNI manager for ""
	I0419 16:58:57.169633   12264 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0419 16:58:57.169861   12264 start.go:340] cluster config:
	{Name:download-only-395100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-395100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 16:58:57.171081   12264 iso.go:125] acquiring lock: {Name:mk297f2abb67cbbcd36490c866afe693892d0c05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 16:58:57.175649   12264 out.go:97] Downloading VM boot image ...
	I0419 16:58:57.175955   12264 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.0-amd64.iso
	I0419 16:59:02.618044   12264 out.go:97] Starting "download-only-395100" primary control-plane node in "download-only-395100" cluster
	I0419 16:59:02.618044   12264 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0419 16:59:02.661382   12264 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0419 16:59:02.661382   12264 cache.go:56] Caching tarball of preloaded images
	I0419 16:59:02.661912   12264 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0419 16:59:02.664966   12264 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0419 16:59:02.665069   12264 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0419 16:59:02.729709   12264 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-395100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-395100"

                                                
                                                
-- /stdout --
** stderr ** 
	W0419 16:59:11.387825    2488 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.23s)
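The `Unable to resolve the current Docker CLI context "default"` warning in stderr above is benign on these Jenkins workers: the Docker CLI keeps per-context metadata under a directory named by the SHA-256 digest of the context name, and the `default` context's `meta.json` was simply never created on this host. The digest component of the missing path can be reproduced directly (illustrative sketch, not part of the test suite):

```python
import hashlib

# The Docker CLI stores context metadata under
# <docker config dir>\contexts\meta\<sha256 of the context name>.
# Hashing the name "default" yields the directory seen in the warning.
print(hashlib.sha256(b"default").hexdigest())
# 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```

Since the warning is emitted before driver selection and the tests use the hyperv driver, it has no effect on the results recorded here.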

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (1.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1962011s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.3s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-395100
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-395100: (1.2975229s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.30s)

                                                
                                    
TestDownloadOnly/v1.30.0/json-events (11.11s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-447900 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-447900 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperv: (11.1104127s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (11.11s)

                                                
                                    
TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/LogsDuration (0.21s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-447900
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-447900: exit status 85 (207.3258ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-395100 | minikube1\jenkins | v1.33.0 | 19 Apr 24 16:58 PDT |                     |
	|         | -p download-only-395100        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube1\jenkins | v1.33.0 | 19 Apr 24 16:59 PDT | 19 Apr 24 16:59 PDT |
	| delete  | -p download-only-395100        | download-only-395100 | minikube1\jenkins | v1.33.0 | 19 Apr 24 16:59 PDT | 19 Apr 24 16:59 PDT |
	| start   | -o=json --download-only        | download-only-447900 | minikube1\jenkins | v1.33.0 | 19 Apr 24 16:59 PDT |                     |
	|         | -p download-only-447900        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 16:59:14
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 16:59:14.113900   11424 out.go:291] Setting OutFile to fd 684 ...
	I0419 16:59:14.114390   11424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 16:59:14.114390   11424 out.go:304] Setting ErrFile to fd 720...
	I0419 16:59:14.114923   11424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 16:59:14.139499   11424 out.go:298] Setting JSON to true
	I0419 16:59:14.143322   11424 start.go:129] hostinfo: {"hostname":"minikube1","uptime":9612,"bootTime":1713561541,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0419 16:59:14.143322   11424 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 16:59:14.150129   11424 out.go:97] [download-only-447900] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0419 16:59:14.150603   11424 notify.go:220] Checking for updates...
	I0419 16:59:14.155652   11424 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 16:59:14.159762   11424 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0419 16:59:14.163318   11424 out.go:169] MINIKUBE_LOCATION=18703
	I0419 16:59:14.166785   11424 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0419 16:59:14.171216   11424 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0419 16:59:14.172260   11424 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 16:59:19.599754   11424 out.go:97] Using the hyperv driver based on user configuration
	I0419 16:59:19.599754   11424 start.go:297] selected driver: hyperv
	I0419 16:59:19.599754   11424 start.go:901] validating driver "hyperv" against <nil>
	I0419 16:59:19.600102   11424 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 16:59:19.650193   11424 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0419 16:59:19.650967   11424 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0419 16:59:19.650967   11424 cni.go:84] Creating CNI manager for ""
	I0419 16:59:19.650967   11424 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 16:59:19.650967   11424 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 16:59:19.651959   11424 start.go:340] cluster config:
	{Name:download-only-447900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-447900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 16:59:19.651959   11424 iso.go:125] acquiring lock: {Name:mk297f2abb67cbbcd36490c866afe693892d0c05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 16:59:19.655530   11424 out.go:97] Starting "download-only-447900" primary control-plane node in "download-only-447900" cluster
	I0419 16:59:19.655530   11424 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 16:59:19.696578   11424 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0419 16:59:19.696637   11424 cache.go:56] Caching tarball of preloaded images
	I0419 16:59:19.696637   11424 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 16:59:19.701000   11424 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0419 16:59:19.701000   11424 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0419 16:59:19.767395   11424 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4?checksum=md5:00b6acf85a82438f3897c0a6fafdcee7 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0419 16:59:22.979216   11424 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0419 16:59:22.979964   11424 preload.go:255] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0419 16:59:23.943663   11424 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 16:59:23.944653   11424 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-447900\config.json ...
	I0419 16:59:23.944653   11424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-447900\config.json: {Name:mk7224f8e1336434cb0288ab486d6175cf3bf180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 16:59:23.946017   11424 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 16:59:23.947711   11424 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\windows\amd64\v1.30.0/kubectl.exe
	
	
	* The control-plane node download-only-447900 host does not exist
	  To start a cluster, run: "minikube start -p download-only-447900"

                                                
                                                
-- /stdout --
** stderr ** 
	W0419 16:59:25.228651    3636 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.21s)
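The `getting checksum` / `verifying checksum` steps in the log above amount to an MD5 compare: the expected digest is carried in the download URL (`?checksum=md5:...`) and checked against the downloaded tarball. A minimal sketch of that verification step (hypothetical data, not the real preload tarball):

```python
import hashlib

def verify_md5(data: bytes, expected_hex: str) -> bool:
    """Hash the downloaded bytes and compare against the advertised digest,
    as the preload download/verify steps in the log do for the .tar.lz4."""
    return hashlib.md5(data).hexdigest() == expected_hex

sample = b"preloaded-images"
print(verify_md5(sample, hashlib.md5(sample).hexdigest()))  # True
print(verify_md5(sample, "0" * 32))                         # False
```

A mismatch at this step would force a re-download rather than caching a corrupt tarball.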

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAll (1.18s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1816821s)
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (1.18s)

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (1.41s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-447900
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-447900: (1.4101714s)
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (1.41s)

                                                
                                    
TestBinaryMirror (7.31s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-147900 --alsologtostderr --binary-mirror http://127.0.0.1:51145 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-147900 --alsologtostderr --binary-mirror http://127.0.0.1:51145 --driver=hyperv: (6.383008s)
helpers_test.go:175: Cleaning up "binary-mirror-147900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-147900
--- PASS: TestBinaryMirror (7.31s)

                                                
                                    
TestOffline (287.8s)

=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-732500 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-732500 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (4m1.6029624s)
helpers_test.go:175: Cleaning up "offline-docker-732500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-732500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-732500: (46.1927369s)
--- PASS: TestOffline (287.80s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-586600
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-586600: exit status 85 (220.2075ms)

                                                
                                                
-- stdout --
	* Profile "addons-586600" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-586600"

                                                
                                                
-- /stdout --
** stderr ** 
	W0419 16:59:38.150413    8800 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.22s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-586600
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-586600: exit status 85 (208.5748ms)

                                                
                                                
-- stdout --
	* Profile "addons-586600" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-586600"

                                                
                                                
-- /stdout --
** stderr ** 
	W0419 16:59:38.143418    6352 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

                                                
                                    
TestCertOptions (487.95s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-287900 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-287900 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (7m9.0846477s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-287900 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-287900 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (9.20166s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-287900 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-287900 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-287900 -- "sudo cat /etc/kubernetes/admin.conf": (9.1274615s)
helpers_test.go:175: Cleaning up "cert-options-287900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-287900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-287900: (40.3938931s)
--- PASS: TestCertOptions (487.95s)

                                                
                                    
TestCertExpiration (964.74s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-098300 --memory=2048 --cert-expiration=3m --driver=hyperv
E0419 19:25:44.603185    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 19:27:07.874144    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-098300 --memory=2048 --cert-expiration=3m --driver=hyperv: (8m57.7144222s)
E0419 19:35:44.597937    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-098300 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-098300 --memory=2048 --cert-expiration=8760h --driver=hyperv: (3m24.6052042s)
helpers_test.go:175: Cleaning up "cert-expiration-098300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-098300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-098300: (42.4007723s)
--- PASS: TestCertExpiration (964.74s)
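For context, the two `--cert-expiration` values exercised above bracket the feature: `3m` forces near-immediate expiry, while `8760h` is one year; the `CertExpiration:26280h0m0s` in the cluster-config dumps earlier is the three-year default. The conversions are simple arithmetic (illustrative):

```python
# Relate the cert-expiration durations seen in this report.
hours_per_year = 365 * 24        # 8760 hours in a (non-leap) year
print(8760 / 24)                 # 365.0 -> the --cert-expiration=8760h flag is one year
print(26280 / hours_per_year)    # 3.0   -> the profile default is three years
```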

                                                
                                    
TestForceSystemdFlag (525.82s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-732500 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-732500 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (7m49.3378885s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-732500 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-732500 ssh "docker info --format {{.CgroupDriver}}": (9.8332102s)
helpers_test.go:175: Cleaning up "force-systemd-flag-732500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-732500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-732500: (46.6414449s)
--- PASS: TestForceSystemdFlag (525.82s)

                                                
                                    
TestForceSystemdEnv (565.18s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-320900 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-320900 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (7m15.778787s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-320900 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-320900 ssh "docker info --format {{.CgroupDriver}}": (10.0234915s)
helpers_test.go:175: Cleaning up "force-systemd-env-320900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-320900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-320900: (1m59.3751642s)
--- PASS: TestForceSystemdEnv (565.18s)

                                                
                                    
TestErrorSpam/start (16.58s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 start --dry-run: (5.446206s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 start --dry-run: (5.5484934s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 start --dry-run: (5.5646865s)
--- PASS: TestErrorSpam/start (16.58s)

                                                
                                    
TestErrorSpam/status (35.47s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 status: (12.2565797s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 status: (11.5764526s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 status: (11.6008759s)
--- PASS: TestErrorSpam/status (35.47s)

TestErrorSpam/pause (21.98s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 pause: (7.4617986s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 pause: (7.1303063s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 pause: (7.3609157s)
--- PASS: TestErrorSpam/pause (21.98s)

TestErrorSpam/unpause (22.16s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 unpause: (7.4913211s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 unpause: (7.3360863s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 unpause: (7.3223427s)
--- PASS: TestErrorSpam/unpause (22.16s)

TestErrorSpam/stop (54.59s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 stop: (33.4779772s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 stop: (10.7530491s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-498400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-498400 stop: (10.3489095s)
--- PASS: TestErrorSpam/stop (54.59s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\3416\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (208.27s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-614300 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-614300 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m28.2457252s)
--- PASS: TestFunctional/serial/StartWithProxy (208.27s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (125.76s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-614300 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-614300 --alsologtostderr -v=8: (2m5.7429164s)
functional_test.go:659: soft start took 2m5.7600428s for "functional-614300" cluster.
--- PASS: TestFunctional/serial/SoftStart (125.76s)

TestFunctional/serial/KubeContext (0.14s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.14s)

TestFunctional/serial/KubectlGetPods (0.25s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-614300 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.25s)

TestFunctional/serial/CacheCmd/cache/add_remote (25.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 cache add registry.k8s.io/pause:3.1: (8.5894064s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 cache add registry.k8s.io/pause:3.3: (8.3582337s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 cache add registry.k8s.io/pause:latest: (8.3778743s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (25.35s)

TestFunctional/serial/CacheCmd/cache/add_local (10.76s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-614300 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3046165590\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-614300 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3046165590\001: (2.4436187s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 cache add minikube-local-cache-test:functional-614300
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 cache add minikube-local-cache-test:functional-614300: (7.8967559s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 cache delete minikube-local-cache-test:functional-614300
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-614300
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (10.76s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.18s)

TestFunctional/serial/CacheCmd/cache/list (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.19s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 ssh sudo crictl images: (8.9781552s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.99s)

TestFunctional/serial/CacheCmd/cache/cache_reload (35.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 ssh sudo docker rmi registry.k8s.io/pause:latest: (8.9049899s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-614300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.160148s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	W0419 17:16:55.365298    1296 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 cache reload: (7.9855157s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.2298963s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (35.28s)

TestFunctional/serial/CacheCmd/cache/delete (0.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.43s)

TestFunctional/serial/MinikubeKubectlCmd (0.49s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 kubectl -- --context functional-614300 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.49s)

TestFunctional/serial/ExtraConfig (127.8s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-614300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-614300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m7.7978748s)
functional_test.go:757: restart took 2m7.797917s for "functional-614300" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (127.80s)

TestFunctional/serial/ComponentHealth (0.19s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-614300 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.19s)

TestFunctional/serial/LogsCmd (8.57s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 logs: (8.5651499s)
--- PASS: TestFunctional/serial/LogsCmd (8.57s)

TestFunctional/serial/LogsFileCmd (10.72s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd419478774\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd419478774\001\logs.txt: (10.72021s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.72s)

TestFunctional/serial/InvalidService (21.09s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-614300 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-614300
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-614300: exit status 115 (16.7260967s)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://172.19.34.3:31271 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	W0419 17:20:27.029160    7156 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube_service_f513297bf07cd3fd942cead3a34f1b094af52476_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-614300 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (21.09s)

TestFunctional/parallel/StatusCmd (42.47s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 status: (13.4841015s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (14.2407977s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 status -o json: (14.7462599s)
--- PASS: TestFunctional/parallel/StatusCmd (42.47s)

TestFunctional/parallel/ServiceCmdConnect (34.72s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-614300 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-614300 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-k2rxn" [1da0bad8-f1f1-4531-9959-fbdb419eefd0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-k2rxn" [1da0bad8-f1f1-4531-9959-fbdb419eefd0] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 16.0182603s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 service hello-node-connect --url: (18.2880748s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.19.34.3:30521
functional_test.go:1671: http://172.19.34.3:30521: success! body:
Hostname: hello-node-connect-57b4589c47-k2rxn

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.19.34.3:8080/

Request Headers:
	accept-encoding=gzip
	host=172.19.34.3:30521
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (34.72s)

TestFunctional/parallel/AddonsCmd (0.63s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.63s)

TestFunctional/parallel/PersistentVolumeClaim (43.75s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [04f6a541-81e8-4d8a-bab2-51e0112a9d5c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0155408s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-614300 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-614300 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-614300 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-614300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [567ac455-6db4-41a1-a734-11a78e02c2de] Pending
helpers_test.go:344: "sp-pod" [567ac455-6db4-41a1-a734-11a78e02c2de] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [567ac455-6db4-41a1-a734-11a78e02c2de] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.0122287s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-614300 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-614300 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-614300 delete -f testdata/storage-provisioner/pod.yaml: (1.7905743s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-614300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8254c898-9334-4d51-90a7-14e1852edf69] Pending
helpers_test.go:344: "sp-pod" [8254c898-9334-4d51-90a7-14e1852edf69] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8254c898-9334-4d51-90a7-14e1852edf69] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0132794s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-614300 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.75s)

TestFunctional/parallel/SSHCmd (21.47s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 ssh "echo hello": (10.6810609s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 ssh "cat /etc/hostname": (10.7878775s)
--- PASS: TestFunctional/parallel/SSHCmd (21.47s)

TestFunctional/parallel/CpCmd (57.45s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 cp testdata\cp-test.txt /home/docker/cp-test.txt: (7.7866262s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 ssh -n functional-614300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 ssh -n functional-614300 "sudo cat /home/docker/cp-test.txt": (9.9318988s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 cp functional-614300:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd2650053477\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 cp functional-614300:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd2650053477\001\cp-test.txt: (10.3728772s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 ssh -n functional-614300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 ssh -n functional-614300 "sudo cat /home/docker/cp-test.txt": (9.8856034s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.0156896s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 ssh -n functional-614300 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 ssh -n functional-614300 "sudo cat /tmp/does/not/exist/cp-test.txt": (11.4487443s)
--- PASS: TestFunctional/parallel/CpCmd (57.45s)

TestFunctional/parallel/MySQL (64.49s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-614300 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-f4fq6" [4a6366da-7b32-4059-a61e-88df79719f33] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-f4fq6" [4a6366da-7b32-4059-a61e-88df79719f33] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 52.0131375s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-614300 exec mysql-64454c8b5c-f4fq6 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-614300 exec mysql-64454c8b5c-f4fq6 -- mysql -ppassword -e "show databases;": exit status 1 (290.1058ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-614300 exec mysql-64454c8b5c-f4fq6 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-614300 exec mysql-64454c8b5c-f4fq6 -- mysql -ppassword -e "show databases;": exit status 1 (303.5523ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-614300 exec mysql-64454c8b5c-f4fq6 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-614300 exec mysql-64454c8b5c-f4fq6 -- mysql -ppassword -e "show databases;": exit status 1 (310.0231ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-614300 exec mysql-64454c8b5c-f4fq6 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-614300 exec mysql-64454c8b5c-f4fq6 -- mysql -ppassword -e "show databases;": exit status 1 (292.7306ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-614300 exec mysql-64454c8b5c-f4fq6 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (64.49s)

TestFunctional/parallel/FileSync (10.78s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/3416/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 ssh "sudo cat /etc/test/nested/copy/3416/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 ssh "sudo cat /etc/test/nested/copy/3416/hosts": (10.7748875s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (10.78s)

TestFunctional/parallel/CertSync (66.53s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/3416.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 ssh "sudo cat /etc/ssl/certs/3416.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 ssh "sudo cat /etc/ssl/certs/3416.pem": (11.0067975s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/3416.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 ssh "sudo cat /usr/share/ca-certificates/3416.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 ssh "sudo cat /usr/share/ca-certificates/3416.pem": (11.0320428s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 ssh "sudo cat /etc/ssl/certs/51391683.0": (11.5613526s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/34162.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 ssh "sudo cat /etc/ssl/certs/34162.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 ssh "sudo cat /etc/ssl/certs/34162.pem": (10.8723377s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/34162.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 ssh "sudo cat /usr/share/ca-certificates/34162.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 ssh "sudo cat /usr/share/ca-certificates/34162.pem": (11.4299735s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.6295275s)
--- PASS: TestFunctional/parallel/CertSync (66.53s)

TestFunctional/parallel/NodeLabels (0.32s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-614300 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.32s)

TestFunctional/parallel/NonActiveRuntimeDisabled (11.3s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-614300 ssh "sudo systemctl is-active crio": exit status 1 (11.2953602s)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	W0419 17:21:24.414756   11808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (11.30s)

TestFunctional/parallel/License (3.14s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.1257936s)
--- PASS: TestFunctional/parallel/License (3.14s)

TestFunctional/parallel/ServiceCmd/DeployApp (18.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-614300 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-614300 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-th5v5" [b3fb4fe3-b5c6-4925-83aa-f4a9aefeeaf2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-th5v5" [b3fb4fe3-b5c6-4925-83aa-f4a9aefeeaf2] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 18.0076808s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (18.42s)

TestFunctional/parallel/ServiceCmd/List (13.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 service list: (13.2774299s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (13.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (12.77s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 service list -o json: (12.7718381s)
functional_test.go:1490: Took "12.7718381s" to run "out/minikube-windows-amd64.exe -p functional-614300 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (12.77s)

TestFunctional/parallel/Version/short (0.24s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 version --short
--- PASS: TestFunctional/parallel/Version/short (0.24s)

TestFunctional/parallel/Version/components (9.56s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 version -o=json --components: (9.5626183s)
--- PASS: TestFunctional/parallel/Version/components (9.56s)

TestFunctional/parallel/ImageCommands/ImageListShort (7.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 image ls --format short --alsologtostderr: (7.5097941s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-614300 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-614300
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-614300
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-614300 image ls --format short --alsologtostderr:
W0419 17:23:50.830914    5320 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0419 17:23:50.839496    5320 out.go:291] Setting OutFile to fd 880 ...
I0419 17:23:50.855728    5320 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 17:23:50.855728    5320 out.go:304] Setting ErrFile to fd 716...
I0419 17:23:50.855728    5320 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 17:23:50.882374    5320 config.go:182] Loaded profile config "functional-614300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 17:23:50.882374    5320 config.go:182] Loaded profile config "functional-614300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 17:23:50.883799    5320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
I0419 17:23:53.069053    5320 main.go:141] libmachine: [stdout =====>] : Running
I0419 17:23:53.069053    5320 main.go:141] libmachine: [stderr =====>] : 
I0419 17:23:53.084040    5320 ssh_runner.go:195] Run: systemctl --version
I0419 17:23:53.084040    5320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
I0419 17:23:55.335063    5320 main.go:141] libmachine: [stdout =====>] : Running
I0419 17:23:55.335063    5320 main.go:141] libmachine: [stderr =====>] : 
I0419 17:23:55.335063    5320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
I0419 17:23:57.997746    5320 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
I0419 17:23:57.997746    5320 main.go:141] libmachine: [stderr =====>] : 
I0419 17:23:57.997746    5320 sshutil.go:53] new ssh client: &{IP:172.19.34.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-614300\id_rsa Username:docker}
I0419 17:23:58.115807    5320 ssh_runner.go:235] Completed: systemctl --version: (5.0316775s)
I0419 17:23:58.128053    5320 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.51s)

TestFunctional/parallel/ImageCommands/ImageListTable (7.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 image ls --format table --alsologtostderr: (7.3664962s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-614300 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-614300 | 41cf2157ab015 | 30B    |
| registry.k8s.io/kube-proxy                  | v1.30.0           | a0bf559e280cf | 84.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-apiserver              | v1.30.0           | c42f13656d0b2 | 117MB  |
| docker.io/library/nginx                     | latest            | 2ac752d7aeb1d | 188MB  |
| gcr.io/google-containers/addon-resizer      | functional-614300 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.30.0           | 259c8277fcbbc | 62MB   |
| registry.k8s.io/kube-controller-manager     | v1.30.0           | c7aad43836fa5 | 111MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/library/nginx                     | alpine            | 11d76b979f02d | 48.3MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-614300 image ls --format table --alsologtostderr:
W0419 17:24:05.878749    2480 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0419 17:24:05.885742    2480 out.go:291] Setting OutFile to fd 952 ...
I0419 17:24:05.886771    2480 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 17:24:05.886771    2480 out.go:304] Setting ErrFile to fd 920...
I0419 17:24:05.886771    2480 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 17:24:05.905237    2480 config.go:182] Loaded profile config "functional-614300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 17:24:05.906429    2480 config.go:182] Loaded profile config "functional-614300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 17:24:05.907115    2480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
I0419 17:24:08.095289    2480 main.go:141] libmachine: [stdout =====>] : Running
I0419 17:24:08.095289    2480 main.go:141] libmachine: [stderr =====>] : 
I0419 17:24:08.112403    2480 ssh_runner.go:195] Run: systemctl --version
I0419 17:24:08.112403    2480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
I0419 17:24:10.323310    2480 main.go:141] libmachine: [stdout =====>] : Running
I0419 17:24:10.323407    2480 main.go:141] libmachine: [stderr =====>] : 
I0419 17:24:10.323407    2480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
I0419 17:24:12.918285    2480 main.go:141] libmachine: [stdout =====>] : 172.19.34.3
I0419 17:24:12.918285    2480 main.go:141] libmachine: [stderr =====>] : 
I0419 17:24:12.918285    2480 sshutil.go:53] new ssh client: &{IP:172.19.34.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-614300\id_rsa Username:docker}
I0419 17:24:13.040806    2480 ssh_runner.go:235] Completed: systemctl --version: (4.9283913s)
I0419 17:24:13.052991    2480 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.37s)

TestFunctional/parallel/ImageCommands/ImageListJson (7.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 image ls --format json --alsologtostderr: (7.5719448s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-614300 image ls --format json --alsologtostderr:
[{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"111000000"},{"id":"11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-614300"],"size":"32900000"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"62000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"41cf2157ab01516b62138d7de4ab81bc565d02c5c94df7b17b98cd16ef7a783e","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-614300"],"size":"30"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"84700000"},{"id":"2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-614300 image ls --format json --alsologtostderr:
W0419 17:23:58.312171    2288 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0419 17:23:58.320181    2288 out.go:291] Setting OutFile to fd 936 ...
I0419 17:23:58.321170    2288 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 17:23:58.321170    2288 out.go:304] Setting ErrFile to fd 688...
I0419 17:23:58.321170    2288 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 17:23:58.337182    2288 config.go:182] Loaded profile config "functional-614300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 17:23:58.338184    2288 config.go:182] Loaded profile config "functional-614300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 17:23:58.339185    2288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
I0419 17:24:00.516059    2288 main.go:141] libmachine: [stdout =====>] : Running

I0419 17:24:00.516059    2288 main.go:141] libmachine: [stderr =====>] : 
I0419 17:24:00.593125    2288 ssh_runner.go:195] Run: systemctl --version
I0419 17:24:00.593125    2288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
I0419 17:24:02.873803    2288 main.go:141] libmachine: [stdout =====>] : Running

I0419 17:24:02.874236    2288 main.go:141] libmachine: [stderr =====>] : 
I0419 17:24:02.874324    2288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
I0419 17:24:05.543588    2288 main.go:141] libmachine: [stdout =====>] : 172.19.34.3

I0419 17:24:05.543662    2288 main.go:141] libmachine: [stderr =====>] : 
I0419 17:24:05.543808    2288 sshutil.go:53] new ssh client: &{IP:172.19.34.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-614300\id_rsa Username:docker}
I0419 17:24:05.661936    2288 ssh_runner.go:235] Completed: systemctl --version: (5.0687058s)
I0419 17:24:05.678479    2288 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.57s)

TestFunctional/parallel/ImageCommands/ImageListYaml (7.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 image ls --format yaml --alsologtostderr: (7.4718324s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-614300 image ls --format yaml --alsologtostderr:
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117000000"
- id: 2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-614300
size: "32900000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 41cf2157ab01516b62138d7de4ab81bc565d02c5c94df7b17b98cd16ef7a783e
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-614300
size: "30"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "111000000"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "62000000"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "84700000"
- id: 11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-614300 image ls --format yaml --alsologtostderr:
W0419 17:23:50.829866    4444 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0419 17:23:50.838215    4444 out.go:291] Setting OutFile to fd 952 ...
I0419 17:23:50.838825    4444 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 17:23:50.838825    4444 out.go:304] Setting ErrFile to fd 900...
I0419 17:23:50.838825    4444 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 17:23:50.857711    4444 config.go:182] Loaded profile config "functional-614300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 17:23:50.858056    4444 config.go:182] Loaded profile config "functional-614300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 17:23:50.859166    4444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
I0419 17:23:53.039546    4444 main.go:141] libmachine: [stdout =====>] : Running

I0419 17:23:53.039546    4444 main.go:141] libmachine: [stderr =====>] : 
I0419 17:23:53.055050    4444 ssh_runner.go:195] Run: systemctl --version
I0419 17:23:53.055050    4444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
I0419 17:23:55.287337    4444 main.go:141] libmachine: [stdout =====>] : Running

I0419 17:23:55.287405    4444 main.go:141] libmachine: [stderr =====>] : 
I0419 17:23:55.287405    4444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
I0419 17:23:57.964331    4444 main.go:141] libmachine: [stdout =====>] : 172.19.34.3

I0419 17:23:57.964331    4444 main.go:141] libmachine: [stderr =====>] : 
I0419 17:23:57.964331    4444 sshutil.go:53] new ssh client: &{IP:172.19.34.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-614300\id_rsa Username:docker}
I0419 17:23:58.074545    4444 ssh_runner.go:235] Completed: systemctl --version: (5.019483s)
I0419 17:23:58.085407    4444 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.47s)

TestFunctional/parallel/ImageCommands/ImageBuild (26.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-614300 ssh pgrep buildkitd: exit status 1 (9.7005182s)

** stderr ** 
	W0419 17:23:58.343180   14560 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 image build -t localhost/my-image:functional-614300 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 image build -t localhost/my-image:functional-614300 testdata\build --alsologtostderr: (9.8323912s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-614300 image build -t localhost/my-image:functional-614300 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 47f4925e25d0
---> Removed intermediate container 47f4925e25d0
---> fe77d01c3ec6
Step 3/3 : ADD content.txt /
---> 4c7baa60692d
Successfully built 4c7baa60692d
Successfully tagged localhost/my-image:functional-614300
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-614300 image build -t localhost/my-image:functional-614300 testdata\build --alsologtostderr:
W0419 17:24:08.037952   10556 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0419 17:24:08.043968   10556 out.go:291] Setting OutFile to fd 936 ...
I0419 17:24:08.062508   10556 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 17:24:08.062508   10556 out.go:304] Setting ErrFile to fd 688...
I0419 17:24:08.062508   10556 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 17:24:08.084297   10556 config.go:182] Loaded profile config "functional-614300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 17:24:08.102683   10556 config.go:182] Loaded profile config "functional-614300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 17:24:08.104027   10556 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
I0419 17:24:10.323252   10556 main.go:141] libmachine: [stdout =====>] : Running

I0419 17:24:10.323356   10556 main.go:141] libmachine: [stderr =====>] : 
I0419 17:24:10.338169   10556 ssh_runner.go:195] Run: systemctl --version
I0419 17:24:10.338169   10556 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-614300 ).state
I0419 17:24:12.516512   10556 main.go:141] libmachine: [stdout =====>] : Running

I0419 17:24:12.517346   10556 main.go:141] libmachine: [stderr =====>] : 
I0419 17:24:12.517397   10556 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-614300 ).networkadapters[0]).ipaddresses[0]
I0419 17:24:15.159219   10556 main.go:141] libmachine: [stdout =====>] : 172.19.34.3

I0419 17:24:15.159219   10556 main.go:141] libmachine: [stderr =====>] : 
I0419 17:24:15.159559   10556 sshutil.go:53] new ssh client: &{IP:172.19.34.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-614300\id_rsa Username:docker}
I0419 17:24:15.263135   10556 ssh_runner.go:235] Completed: systemctl --version: (4.9249536s)
I0419 17:24:15.263330   10556 build_images.go:161] Building image from path: C:\Users\jenkins.minikube1\AppData\Local\Temp\build.1766363232.tar
I0419 17:24:15.276940   10556 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0419 17:24:15.314623   10556 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1766363232.tar
I0419 17:24:15.322650   10556 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1766363232.tar: stat -c "%s %y" /var/lib/minikube/build/build.1766363232.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1766363232.tar': No such file or directory
I0419 17:24:15.322903   10556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\AppData\Local\Temp\build.1766363232.tar --> /var/lib/minikube/build/build.1766363232.tar (3072 bytes)
I0419 17:24:15.430439   10556 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1766363232
I0419 17:24:15.466151   10556 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1766363232 -xf /var/lib/minikube/build/build.1766363232.tar
I0419 17:24:15.510276   10556 docker.go:360] Building image: /var/lib/minikube/build/build.1766363232
I0419 17:24:15.520150   10556 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-614300 /var/lib/minikube/build/build.1766363232
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0419 17:24:17.651403   10556 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-614300 /var/lib/minikube/build/build.1766363232: (2.1312482s)
I0419 17:24:17.664398   10556 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1766363232
I0419 17:24:17.701029   10556 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1766363232.tar
I0419 17:24:17.723002   10556 build_images.go:217] Built localhost/my-image:functional-614300 from C:\Users\jenkins.minikube1\AppData\Local\Temp\build.1766363232.tar
I0419 17:24:17.723070   10556 build_images.go:133] succeeded building to: functional-614300
I0419 17:24:17.723070   10556 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 image ls: (7.3685344s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (26.90s)
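[editor's note] For reference, the Step 1/3 through 3/3 lines in the build output above imply a build context along these lines; this is a sketch reconstructed from the log, not the verbatim contents of testdata\build, and `content.txt` is whatever file ships in that directory:

	FROM gcr.io/k8s-minikube/busybox
	# no-op layer, matches "Step 2/3 : RUN true" in the log
	RUN true
	# copies the context file into the image root, matches "Step 3/3 : ADD content.txt /"
	ADD content.txt /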

TestFunctional/parallel/ImageCommands/Setup (4.67s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.3626936s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-614300
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.67s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (26.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 image load --daemon gcr.io/google-containers/addon-resizer:functional-614300 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 image load --daemon gcr.io/google-containers/addon-resizer:functional-614300 --alsologtostderr: (17.8580622s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 image ls: (8.4689374s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (26.33s)

TestFunctional/parallel/DockerEnv/powershell (48.65s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-614300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-614300"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-614300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-614300": (31.4244881s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-614300 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-614300 docker-env | Invoke-Expression ; docker images": (17.2020152s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (48.65s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (22.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 image load --daemon gcr.io/google-containers/addon-resizer:functional-614300 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 image load --daemon gcr.io/google-containers/addon-resizer:functional-614300 --alsologtostderr: (13.6464401s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 image ls: (8.3826901s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (22.03s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (29.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.1445861s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-614300
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 image load --daemon gcr.io/google-containers/addon-resizer:functional-614300 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 image load --daemon gcr.io/google-containers/addon-resizer:functional-614300 --alsologtostderr: (16.4611708s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 image ls: (8.2912962s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (29.17s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.41s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 update-context --alsologtostderr -v=2: (2.413734s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.41s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.38s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 update-context --alsologtostderr -v=2: (2.3743037s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.38s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.44s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 update-context --alsologtostderr -v=2: (2.4345275s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.44s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (10s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 image save gcr.io/google-containers/addon-resizer:functional-614300 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 image save gcr.io/google-containers/addon-resizer:functional-614300 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (10.00404s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (11.75s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.2614582s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (11.75s)

TestFunctional/parallel/ImageCommands/ImageRemove (17.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 image rm gcr.io/google-containers/addon-resizer:functional-614300 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 image rm gcr.io/google-containers/addon-resizer:functional-614300 --alsologtostderr: (8.8808674s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 image ls: (8.9651393s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (17.85s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.33s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-614300 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-614300 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-614300 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-614300 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 7472: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 9100: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.33s)

TestFunctional/parallel/ProfileCmd/profile_list (12.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (12.2206822s)
functional_test.go:1311: Took "12.221297s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "199.3625ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (12.42s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-614300 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.7s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-614300 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [84d4ecdb-485b-4b80-873e-bd6b3f33797a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [84d4ecdb-485b-4b80-873e-bd6b3f33797a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.0210749s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.70s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (11.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (11.3289557s)
functional_test.go:1362: Took "11.3295181s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "207.8649ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (11.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (20.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (11.2770182s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 image ls: (9.5677088s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (20.85s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-614300 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3304: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (11.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-614300
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-614300 image save --daemon gcr.io/google-containers/addon-resizer:functional-614300 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-614300 image save --daemon gcr.io/google-containers/addon-resizer:functional-614300 --alsologtostderr: (11.0539639s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-614300
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (11.49s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.5s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-614300
--- PASS: TestFunctional/delete_addon-resizer_images (0.50s)

                                                
                                    
TestFunctional/delete_my-image_image (0.2s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-614300
--- PASS: TestFunctional/delete_my-image_image (0.20s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.2s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-614300
--- PASS: TestFunctional/delete_minikube_cached_images (0.20s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (687.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-095800 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0419 17:30:44.587197    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 17:30:44.615689    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 17:30:44.625820    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 17:30:44.656849    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 17:30:44.697642    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 17:30:44.788524    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 17:30:44.959011    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 17:30:45.288423    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 17:30:45.942825    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 17:30:47.233211    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 17:30:49.797633    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 17:30:54.922240    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 17:31:05.173250    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 17:31:25.661880    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 17:32:06.637141    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 17:33:28.560527    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 17:35:44.589085    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 17:36:12.409270    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-095800 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (10m53.0385321s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr
E0419 17:40:44.582412    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr: (34.8792098s)
--- PASS: TestMultiControlPlane/serial/StartCluster (687.92s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (11.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-095800 -- rollout status deployment/busybox: (3.7778658s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-dxkjp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-dxkjp -- nslookup kubernetes.io: (1.9102753s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-l275w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-tmxkg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-tmxkg -- nslookup kubernetes.io: (1.5084179s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-dxkjp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-l275w -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-tmxkg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-dxkjp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-l275w -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-095800 -- exec busybox-fc5497c4f-tmxkg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (11.95s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (249.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-095800 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-095800 -v=7 --alsologtostderr: (3m22.6934895s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr
E0419 17:45:44.583039    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr: (47.0249845s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (249.72s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-095800 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.19s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (27.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (27.3402064s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (27.34s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (609.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 status --output json -v=7 --alsologtostderr
E0419 17:47:07.795000    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 status --output json -v=7 --alsologtostderr: (47.0408115s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp testdata\cp-test.txt ha-095800:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp testdata\cp-test.txt ha-095800:/home/docker/cp-test.txt: (9.2570689s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800 "sudo cat /home/docker/cp-test.txt": (9.224762s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4282152140\001\cp-test_ha-095800.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4282152140\001\cp-test_ha-095800.txt: (9.2221963s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800 "sudo cat /home/docker/cp-test.txt": (9.30463s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800:/home/docker/cp-test.txt ha-095800-m02:/home/docker/cp-test_ha-095800_ha-095800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800:/home/docker/cp-test.txt ha-095800-m02:/home/docker/cp-test_ha-095800_ha-095800-m02.txt: (16.1029376s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800 "sudo cat /home/docker/cp-test.txt": (9.0884203s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m02 "sudo cat /home/docker/cp-test_ha-095800_ha-095800-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m02 "sudo cat /home/docker/cp-test_ha-095800_ha-095800-m02.txt": (9.1735041s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800:/home/docker/cp-test.txt ha-095800-m03:/home/docker/cp-test_ha-095800_ha-095800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800:/home/docker/cp-test.txt ha-095800-m03:/home/docker/cp-test_ha-095800_ha-095800-m03.txt: (16.2232077s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800 "sudo cat /home/docker/cp-test.txt": (9.4105811s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m03 "sudo cat /home/docker/cp-test_ha-095800_ha-095800-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m03 "sudo cat /home/docker/cp-test_ha-095800_ha-095800-m03.txt": (9.3397407s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800:/home/docker/cp-test.txt ha-095800-m04:/home/docker/cp-test_ha-095800_ha-095800-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800:/home/docker/cp-test.txt ha-095800-m04:/home/docker/cp-test_ha-095800_ha-095800-m04.txt: (16.1554665s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800 "sudo cat /home/docker/cp-test.txt": (9.2421808s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m04 "sudo cat /home/docker/cp-test_ha-095800_ha-095800-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m04 "sudo cat /home/docker/cp-test_ha-095800_ha-095800-m04.txt": (9.2942834s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp testdata\cp-test.txt ha-095800-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp testdata\cp-test.txt ha-095800-m02:/home/docker/cp-test.txt: (9.3293623s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m02 "sudo cat /home/docker/cp-test.txt": (9.1942784s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4282152140\001\cp-test_ha-095800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4282152140\001\cp-test_ha-095800-m02.txt: (9.2204826s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m02 "sudo cat /home/docker/cp-test.txt": (9.2654682s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m02:/home/docker/cp-test.txt ha-095800:/home/docker/cp-test_ha-095800-m02_ha-095800.txt
E0419 17:50:44.587435    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m02:/home/docker/cp-test.txt ha-095800:/home/docker/cp-test_ha-095800-m02_ha-095800.txt: (16.1058517s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m02 "sudo cat /home/docker/cp-test.txt": (9.2186075s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800 "sudo cat /home/docker/cp-test_ha-095800-m02_ha-095800.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800 "sudo cat /home/docker/cp-test_ha-095800-m02_ha-095800.txt": (9.2441261s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m02:/home/docker/cp-test.txt ha-095800-m03:/home/docker/cp-test_ha-095800-m02_ha-095800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m02:/home/docker/cp-test.txt ha-095800-m03:/home/docker/cp-test_ha-095800-m02_ha-095800-m03.txt: (16.1384548s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m02 "sudo cat /home/docker/cp-test.txt": (9.1795874s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m03 "sudo cat /home/docker/cp-test_ha-095800-m02_ha-095800-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m03 "sudo cat /home/docker/cp-test_ha-095800-m02_ha-095800-m03.txt": (9.2326324s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m02:/home/docker/cp-test.txt ha-095800-m04:/home/docker/cp-test_ha-095800-m02_ha-095800-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m02:/home/docker/cp-test.txt ha-095800-m04:/home/docker/cp-test_ha-095800-m02_ha-095800-m04.txt: (15.9980742s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m02 "sudo cat /home/docker/cp-test.txt": (9.2085856s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m04 "sudo cat /home/docker/cp-test_ha-095800-m02_ha-095800-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m04 "sudo cat /home/docker/cp-test_ha-095800-m02_ha-095800-m04.txt": (9.331238s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp testdata\cp-test.txt ha-095800-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp testdata\cp-test.txt ha-095800-m03:/home/docker/cp-test.txt: (9.2995305s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m03 "sudo cat /home/docker/cp-test.txt": (9.2047811s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4282152140\001\cp-test_ha-095800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4282152140\001\cp-test_ha-095800-m03.txt: (9.1622406s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m03 "sudo cat /home/docker/cp-test.txt": (9.232314s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m03:/home/docker/cp-test.txt ha-095800:/home/docker/cp-test_ha-095800-m03_ha-095800.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m03:/home/docker/cp-test.txt ha-095800:/home/docker/cp-test_ha-095800-m03_ha-095800.txt: (16.1059567s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m03 "sudo cat /home/docker/cp-test.txt": (9.3469927s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800 "sudo cat /home/docker/cp-test_ha-095800-m03_ha-095800.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800 "sudo cat /home/docker/cp-test_ha-095800-m03_ha-095800.txt": (9.2597332s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m03:/home/docker/cp-test.txt ha-095800-m02:/home/docker/cp-test_ha-095800-m03_ha-095800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m03:/home/docker/cp-test.txt ha-095800-m02:/home/docker/cp-test_ha-095800-m03_ha-095800-m02.txt: (16.0081106s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m03 "sudo cat /home/docker/cp-test.txt": (9.1893554s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m02 "sudo cat /home/docker/cp-test_ha-095800-m03_ha-095800-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m02 "sudo cat /home/docker/cp-test_ha-095800-m03_ha-095800-m02.txt": (9.1091037s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m03:/home/docker/cp-test.txt ha-095800-m04:/home/docker/cp-test_ha-095800-m03_ha-095800-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m03:/home/docker/cp-test.txt ha-095800-m04:/home/docker/cp-test_ha-095800-m03_ha-095800-m04.txt: (16.189799s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m03 "sudo cat /home/docker/cp-test.txt": (9.1800233s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m04 "sudo cat /home/docker/cp-test_ha-095800-m03_ha-095800-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m04 "sudo cat /home/docker/cp-test_ha-095800-m03_ha-095800-m04.txt": (9.1389464s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp testdata\cp-test.txt ha-095800-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp testdata\cp-test.txt ha-095800-m04:/home/docker/cp-test.txt: (9.1997988s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m04 "sudo cat /home/docker/cp-test.txt": (9.1684707s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4282152140\001\cp-test_ha-095800-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4282152140\001\cp-test_ha-095800-m04.txt: (9.2174474s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m04 "sudo cat /home/docker/cp-test.txt": (9.2423243s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m04:/home/docker/cp-test.txt ha-095800:/home/docker/cp-test_ha-095800-m04_ha-095800.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m04:/home/docker/cp-test.txt ha-095800:/home/docker/cp-test_ha-095800-m04_ha-095800.txt: (16.1020557s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m04 "sudo cat /home/docker/cp-test.txt"
E0419 17:55:44.587074    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m04 "sudo cat /home/docker/cp-test.txt": (9.1619579s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800 "sudo cat /home/docker/cp-test_ha-095800-m04_ha-095800.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800 "sudo cat /home/docker/cp-test_ha-095800-m04_ha-095800.txt": (9.1082181s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m04:/home/docker/cp-test.txt ha-095800-m02:/home/docker/cp-test_ha-095800-m04_ha-095800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m04:/home/docker/cp-test.txt ha-095800-m02:/home/docker/cp-test_ha-095800-m04_ha-095800-m02.txt: (16.1605546s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m04 "sudo cat /home/docker/cp-test.txt": (9.2317409s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m02 "sudo cat /home/docker/cp-test_ha-095800-m04_ha-095800-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m02 "sudo cat /home/docker/cp-test_ha-095800-m04_ha-095800-m02.txt": (9.3587208s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m04:/home/docker/cp-test.txt ha-095800-m03:/home/docker/cp-test_ha-095800-m04_ha-095800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 cp ha-095800-m04:/home/docker/cp-test.txt ha-095800-m03:/home/docker/cp-test_ha-095800-m04_ha-095800-m03.txt: (16.0152752s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m04 "sudo cat /home/docker/cp-test.txt": (9.1354981s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m03 "sudo cat /home/docker/cp-test_ha-095800-m04_ha-095800-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 ssh -n ha-095800-m03 "sudo cat /home/docker/cp-test_ha-095800-m04_ha-095800-m03.txt": (9.1662662s)
--- PASS: TestMultiControlPlane/serial/CopyFile (609.77s)

TestMultiControlPlane/serial/StopSecondaryNode (71.87s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-095800 node stop m02 -v=7 --alsologtostderr: (34.8121598s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-095800 status -v=7 --alsologtostderr: exit status 7 (37.0440441s)

-- stdout --
	ha-095800
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-095800-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-095800-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-095800-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	W0419 17:57:42.624985   14312 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0419 17:57:42.631933   14312 out.go:291] Setting OutFile to fd 976 ...
	I0419 17:57:42.633631   14312 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 17:57:42.633631   14312 out.go:304] Setting ErrFile to fd 864...
	I0419 17:57:42.633631   14312 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 17:57:42.652034   14312 out.go:298] Setting JSON to false
	I0419 17:57:42.652144   14312 mustload.go:65] Loading cluster: ha-095800
	I0419 17:57:42.652281   14312 notify.go:220] Checking for updates...
	I0419 17:57:42.654789   14312 config.go:182] Loaded profile config "ha-095800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:57:42.654789   14312 status.go:255] checking status of ha-095800 ...
	I0419 17:57:42.657572   14312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:57:44.788298   14312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:57:44.788705   14312 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:57:44.788705   14312 status.go:330] ha-095800 host status = "Running" (err=<nil>)
	I0419 17:57:44.788705   14312 host.go:66] Checking if "ha-095800" exists ...
	I0419 17:57:44.789648   14312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:57:46.946573   14312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:57:46.946573   14312 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:57:46.946573   14312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:57:49.519591   14312 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:57:49.519591   14312 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:57:49.519591   14312 host.go:66] Checking if "ha-095800" exists ...
	I0419 17:57:49.532660   14312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 17:57:49.532660   14312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800 ).state
	I0419 17:57:51.596978   14312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:57:51.609543   14312 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:57:51.609644   14312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800 ).networkadapters[0]).ipaddresses[0]
	I0419 17:57:54.187925   14312 main.go:141] libmachine: [stdout =====>] : 172.19.32.218
	
	I0419 17:57:54.195798   14312 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:57:54.195989   14312 sshutil.go:53] new ssh client: &{IP:172.19.32.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800\id_rsa Username:docker}
	I0419 17:57:54.293149   14312 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7604783s)
	I0419 17:57:54.307141   14312 ssh_runner.go:195] Run: systemctl --version
	I0419 17:57:54.332166   14312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 17:57:54.362461   14312 kubeconfig.go:125] found "ha-095800" server: "https://172.19.47.254:8443"
	I0419 17:57:54.362574   14312 api_server.go:166] Checking apiserver status ...
	I0419 17:57:54.377707   14312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 17:57:54.422995   14312 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1992/cgroup
	W0419 17:57:54.442832   14312 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1992/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 17:57:54.456974   14312 ssh_runner.go:195] Run: ls
	I0419 17:57:54.464134   14312 api_server.go:253] Checking apiserver healthz at https://172.19.47.254:8443/healthz ...
	I0419 17:57:54.474401   14312 api_server.go:279] https://172.19.47.254:8443/healthz returned 200:
	ok
	I0419 17:57:54.474401   14312 status.go:422] ha-095800 apiserver status = Running (err=<nil>)
	I0419 17:57:54.474401   14312 status.go:257] ha-095800 status: &{Name:ha-095800 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 17:57:54.475469   14312 status.go:255] checking status of ha-095800-m02 ...
	I0419 17:57:54.476233   14312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m02 ).state
	I0419 17:57:56.515163   14312 main.go:141] libmachine: [stdout =====>] : Off
	
	I0419 17:57:56.527404   14312 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:57:56.527404   14312 status.go:330] ha-095800-m02 host status = "Stopped" (err=<nil>)
	I0419 17:57:56.527404   14312 status.go:343] host is not running, skipping remaining checks
	I0419 17:57:56.527404   14312 status.go:257] ha-095800-m02 status: &{Name:ha-095800-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 17:57:56.527404   14312 status.go:255] checking status of ha-095800-m03 ...
	I0419 17:57:56.528226   14312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:57:58.616198   14312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:57:58.630025   14312 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:57:58.630025   14312 status.go:330] ha-095800-m03 host status = "Running" (err=<nil>)
	I0419 17:57:58.630025   14312 host.go:66] Checking if "ha-095800-m03" exists ...
	I0419 17:57:58.630814   14312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:58:00.739617   14312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:58:00.739617   14312 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:58:00.739617   14312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:58:03.299536   14312 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:58:03.299536   14312 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:58:03.300965   14312 host.go:66] Checking if "ha-095800-m03" exists ...
	I0419 17:58:03.317495   14312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 17:58:03.317495   14312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m03 ).state
	I0419 17:58:05.412725   14312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:58:05.412725   14312 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:58:05.424951   14312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m03 ).networkadapters[0]).ipaddresses[0]
	I0419 17:58:07.888491   14312 main.go:141] libmachine: [stdout =====>] : 172.19.47.152
	
	I0419 17:58:07.888491   14312 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:58:07.900905   14312 sshutil.go:53] new ssh client: &{IP:172.19.47.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m03\id_rsa Username:docker}
	I0419 17:58:08.010054   14312 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6924626s)
	I0419 17:58:08.023732   14312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 17:58:08.051466   14312 kubeconfig.go:125] found "ha-095800" server: "https://172.19.47.254:8443"
	I0419 17:58:08.051556   14312 api_server.go:166] Checking apiserver status ...
	I0419 17:58:08.064621   14312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 17:58:08.105053   14312 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2227/cgroup
	W0419 17:58:08.126949   14312 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2227/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 17:58:08.140529   14312 ssh_runner.go:195] Run: ls
	I0419 17:58:08.155591   14312 api_server.go:253] Checking apiserver healthz at https://172.19.47.254:8443/healthz ...
	I0419 17:58:08.162092   14312 api_server.go:279] https://172.19.47.254:8443/healthz returned 200:
	ok
	I0419 17:58:08.163746   14312 status.go:422] ha-095800-m03 apiserver status = Running (err=<nil>)
	I0419 17:58:08.163746   14312 status.go:257] ha-095800-m03 status: &{Name:ha-095800-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 17:58:08.163831   14312 status.go:255] checking status of ha-095800-m04 ...
	I0419 17:58:08.165068   14312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m04 ).state
	I0419 17:58:10.199711   14312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:58:10.199711   14312 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:58:10.199711   14312 status.go:330] ha-095800-m04 host status = "Running" (err=<nil>)
	I0419 17:58:10.212133   14312 host.go:66] Checking if "ha-095800-m04" exists ...
	I0419 17:58:10.213204   14312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m04 ).state
	I0419 17:58:12.294078   14312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:58:12.307039   14312 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:58:12.307039   14312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m04 ).networkadapters[0]).ipaddresses[0]
	I0419 17:58:14.815497   14312 main.go:141] libmachine: [stdout =====>] : 172.19.41.16
	
	I0419 17:58:14.815497   14312 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:58:14.815497   14312 host.go:66] Checking if "ha-095800-m04" exists ...
	I0419 17:58:14.842952   14312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 17:58:14.842952   14312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-095800-m04 ).state
	I0419 17:58:16.866163   14312 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 17:58:16.866163   14312 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:58:16.878253   14312 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-095800-m04 ).networkadapters[0]).ipaddresses[0]
	I0419 17:58:19.362347   14312 main.go:141] libmachine: [stdout =====>] : 172.19.41.16
	
	I0419 17:58:19.375394   14312 main.go:141] libmachine: [stderr =====>] : 
	I0419 17:58:19.375604   14312 sshutil.go:53] new ssh client: &{IP:172.19.41.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-095800-m04\id_rsa Username:docker}
	I0419 17:58:19.475308   14312 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6323452s)
	I0419 17:58:19.491231   14312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 17:58:19.519038   14312 status.go:257] ha-095800-m04 status: &{Name:ha-095800-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (71.87s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (20.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (20.5807127s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (20.58s)

TestImageBuild/serial/Setup (196.13s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-162800 --driver=hyperv
E0419 18:03:47.803835    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 18:05:44.593836    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-162800 --driver=hyperv: (3m16.1251203s)
--- PASS: TestImageBuild/serial/Setup (196.13s)

TestImageBuild/serial/NormalBuild (9.74s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-162800
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-162800: (9.7279467s)
--- PASS: TestImageBuild/serial/NormalBuild (9.74s)

TestImageBuild/serial/BuildWithBuildArg (8.69s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-162800
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-162800: (8.6829611s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.69s)

TestImageBuild/serial/BuildWithDockerIgnore (7.61s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-162800
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-162800: (7.6125912s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.61s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.43s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-162800
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-162800: (7.4267352s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.43s)

TestJSONOutput/start/Command (236.17s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-535400 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0419 18:10:44.587678    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-535400 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m56.1546615s)
--- PASS: TestJSONOutput/start/Command (236.17s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (7.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-535400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-535400 --output=json --user=testUser: (7.7459188s)
--- PASS: TestJSONOutput/pause/Command (7.76s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.48s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-535400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-535400 --output=json --user=testUser: (7.479647s)
--- PASS: TestJSONOutput/unpause/Command (7.48s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (33.59s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-535400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-535400 --output=json --user=testUser: (33.5838908s)
--- PASS: TestJSONOutput/stop/Command (33.59s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.38s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-547100 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-547100 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (225.8115ms)

-- stdout --
	{"specversion":"1.0","id":"e89ae6b6-62e9-496c-9aca-4161c009bbd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-547100] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3943c012-36ad-43d3-ba3b-4417c0f5aff3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube1\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"55df4551-bc40-42a5-bf3c-280023262ad2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bb59d497-3be7-4c0f-8097-17bc4dfdbb86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"407453ea-93e3-4c15-adc9-79cfaea04b82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18703"}}
	{"specversion":"1.0","id":"2f1e1579-ac05-4a58-98ca-18d8d2563d89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"73273bd8-6077-4a04-9c3e-cfcc807e4d11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W0419 18:12:57.971512    4004 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-547100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-547100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-547100: (1.1363605s)
--- PASS: TestErrorJSONOutput (1.38s)

TestMainNoArgs (0.17s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.17s)

TestMinikubeProfile (518.7s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-160200 --driver=hyperv
E0419 18:15:44.593616    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-160200 --driver=hyperv: (3m11.1181339s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-881400 --driver=hyperv
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-881400 --driver=hyperv: (3m17.2347383s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-160200
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (21.4105182s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-881400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (21.3986505s)
helpers_test.go:175: Cleaning up "second-881400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-881400
E0419 18:20:27.821240    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 18:20:44.590186    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-881400: (40.2151684s)
helpers_test.go:175: Cleaning up "first-160200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-160200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-160200: (46.5060995s)
--- PASS: TestMinikubeProfile (518.70s)

TestMountStart/serial/StartWithMountFirst (150.48s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-393300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-393300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m29.4632561s)
--- PASS: TestMountStart/serial/StartWithMountFirst (150.48s)

TestMountStart/serial/VerifyMountFirst (9.17s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-393300 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-393300 ssh -- ls /minikube-host: (9.1590501s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.17s)

TestMountStart/serial/StartWithMountSecond (150.38s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-393300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0419 18:25:44.595530    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-393300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m29.3766817s)
--- PASS: TestMountStart/serial/StartWithMountSecond (150.38s)

TestMountStart/serial/VerifyMountSecond (9.04s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-393300 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-393300 ssh -- ls /minikube-host: (9.0312386s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.04s)

TestMountStart/serial/DeleteFirst (26.5s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-393300 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-393300 --alsologtostderr -v=5: (26.4972614s)
--- PASS: TestMountStart/serial/DeleteFirst (26.50s)

TestMountStart/serial/VerifyMountPostDelete (9s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-393300 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-393300 ssh -- ls /minikube-host: (8.9979585s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.00s)

TestMountStart/serial/Stop (28.87s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-393300
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-393300: (28.8563744s)
--- PASS: TestMountStart/serial/Stop (28.87s)

TestMultiNode/serial/FreshStart2Nodes (414.54s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-348000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0419 18:35:44.598898    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
E0419 18:37:07.846280    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-348000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m31.463682s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 status --alsologtostderr: (23.077282s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (414.54s)

TestMultiNode/serial/DeployApp2Nodes (9.58s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- rollout status deployment/busybox: (4.1808624s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- exec busybox-fc5497c4f-2d5hs -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- exec busybox-fc5497c4f-2d5hs -- nslookup kubernetes.io: (1.7569135s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- exec busybox-fc5497c4f-xnz2k -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- exec busybox-fc5497c4f-2d5hs -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- exec busybox-fc5497c4f-xnz2k -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- exec busybox-fc5497c4f-2d5hs -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-348000 -- exec busybox-fc5497c4f-xnz2k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.58s)

TestMultiNode/serial/AddNode (221.49s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-348000 -v 3 --alsologtostderr
E0419 18:40:44.591322    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-348000 -v 3 --alsologtostderr: (3m6.2041679s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 status --alsologtostderr: (35.286149s)
--- PASS: TestMultiNode/serial/AddNode (221.49s)

TestMultiNode/serial/MultiNodeLabels (0.21s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-348000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.21s)

TestMultiNode/serial/ProfileList (11.97s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.9654181s)
--- PASS: TestMultiNode/serial/ProfileList (11.97s)

TestMultiNode/serial/CopyFile (355.13s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 status --output json --alsologtostderr: (35.6654772s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 cp testdata\cp-test.txt multinode-348000:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 cp testdata\cp-test.txt multinode-348000:/home/docker/cp-test.txt: (9.2518472s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000 "sudo cat /home/docker/cp-test.txt": (9.2234385s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 cp multinode-348000:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1378212137\001\cp-test_multinode-348000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 cp multinode-348000:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1378212137\001\cp-test_multinode-348000.txt: (9.2455444s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000 "sudo cat /home/docker/cp-test.txt": (9.304625s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 cp multinode-348000:/home/docker/cp-test.txt multinode-348000-m02:/home/docker/cp-test_multinode-348000_multinode-348000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 cp multinode-348000:/home/docker/cp-test.txt multinode-348000-m02:/home/docker/cp-test_multinode-348000_multinode-348000-m02.txt: (16.5632165s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000 "sudo cat /home/docker/cp-test.txt": (9.2653815s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m02 "sudo cat /home/docker/cp-test_multinode-348000_multinode-348000-m02.txt"
E0419 18:45:44.602603    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m02 "sudo cat /home/docker/cp-test_multinode-348000_multinode-348000-m02.txt": (9.3285328s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 cp multinode-348000:/home/docker/cp-test.txt multinode-348000-m03:/home/docker/cp-test_multinode-348000_multinode-348000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 cp multinode-348000:/home/docker/cp-test.txt multinode-348000-m03:/home/docker/cp-test_multinode-348000_multinode-348000-m03.txt: (16.1429228s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000 "sudo cat /home/docker/cp-test.txt": (9.2401085s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m03 "sudo cat /home/docker/cp-test_multinode-348000_multinode-348000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m03 "sudo cat /home/docker/cp-test_multinode-348000_multinode-348000-m03.txt": (9.2253611s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 cp testdata\cp-test.txt multinode-348000-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 cp testdata\cp-test.txt multinode-348000-m02:/home/docker/cp-test.txt: (9.2521751s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m02 "sudo cat /home/docker/cp-test.txt": (9.1264728s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 cp multinode-348000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1378212137\001\cp-test_multinode-348000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 cp multinode-348000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1378212137\001\cp-test_multinode-348000-m02.txt: (9.0047749s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m02 "sudo cat /home/docker/cp-test.txt": (9.1702974s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 cp multinode-348000-m02:/home/docker/cp-test.txt multinode-348000:/home/docker/cp-test_multinode-348000-m02_multinode-348000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 cp multinode-348000-m02:/home/docker/cp-test.txt multinode-348000:/home/docker/cp-test_multinode-348000-m02_multinode-348000.txt: (16.2263177s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m02 "sudo cat /home/docker/cp-test.txt": (9.3280438s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000 "sudo cat /home/docker/cp-test_multinode-348000-m02_multinode-348000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000 "sudo cat /home/docker/cp-test_multinode-348000-m02_multinode-348000.txt": (9.2281456s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 cp multinode-348000-m02:/home/docker/cp-test.txt multinode-348000-m03:/home/docker/cp-test_multinode-348000-m02_multinode-348000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 cp multinode-348000-m02:/home/docker/cp-test.txt multinode-348000-m03:/home/docker/cp-test_multinode-348000-m02_multinode-348000-m03.txt: (16.0382612s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m02 "sudo cat /home/docker/cp-test.txt": (9.203211s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m03 "sudo cat /home/docker/cp-test_multinode-348000-m02_multinode-348000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m03 "sudo cat /home/docker/cp-test_multinode-348000-m02_multinode-348000-m03.txt": (9.3248112s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 cp testdata\cp-test.txt multinode-348000-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 cp testdata\cp-test.txt multinode-348000-m03:/home/docker/cp-test.txt: (9.3869631s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m03 "sudo cat /home/docker/cp-test.txt": (9.5056726s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 cp multinode-348000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1378212137\001\cp-test_multinode-348000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 cp multinode-348000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1378212137\001\cp-test_multinode-348000-m03.txt: (9.2026492s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m03 "sudo cat /home/docker/cp-test.txt": (9.1538689s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 cp multinode-348000-m03:/home/docker/cp-test.txt multinode-348000:/home/docker/cp-test_multinode-348000-m03_multinode-348000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 cp multinode-348000-m03:/home/docker/cp-test.txt multinode-348000:/home/docker/cp-test_multinode-348000-m03_multinode-348000.txt: (16.0573522s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m03 "sudo cat /home/docker/cp-test.txt": (9.3108956s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000 "sudo cat /home/docker/cp-test_multinode-348000-m03_multinode-348000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000 "sudo cat /home/docker/cp-test_multinode-348000-m03_multinode-348000.txt": (9.2959441s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 cp multinode-348000-m03:/home/docker/cp-test.txt multinode-348000-m02:/home/docker/cp-test_multinode-348000-m03_multinode-348000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 cp multinode-348000-m03:/home/docker/cp-test.txt multinode-348000-m02:/home/docker/cp-test_multinode-348000-m03_multinode-348000-m02.txt: (16.2923216s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m03 "sudo cat /home/docker/cp-test.txt": (9.2378181s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m02 "sudo cat /home/docker/cp-test_multinode-348000-m03_multinode-348000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 ssh -n multinode-348000-m02 "sudo cat /home/docker/cp-test_multinode-348000-m03_multinode-348000-m02.txt": (9.3120007s)
--- PASS: TestMultiNode/serial/CopyFile (355.13s)

TestMultiNode/serial/StopNode (75s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 node stop m03: (24.0804957s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 status
E0419 18:50:44.603800    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-348000 status: exit status 7 (25.5024158s)

-- stdout --
	multinode-348000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-348000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-348000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0419 18:50:20.205061    6140 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-348000 status --alsologtostderr: exit status 7 (25.4163433s)

-- stdout --
	multinode-348000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-348000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-348000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0419 18:50:45.701637    8156 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0419 18:50:45.709606    8156 out.go:291] Setting OutFile to fd 976 ...
	I0419 18:50:45.709606    8156 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 18:50:45.709606    8156 out.go:304] Setting ErrFile to fd 616...
	I0419 18:50:45.709606    8156 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 18:50:45.721647    8156 out.go:298] Setting JSON to false
	I0419 18:50:45.721647    8156 mustload.go:65] Loading cluster: multinode-348000
	I0419 18:50:45.721647    8156 notify.go:220] Checking for updates...
	I0419 18:50:45.726342    8156 config.go:182] Loaded profile config "multinode-348000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 18:50:45.726342    8156 status.go:255] checking status of multinode-348000 ...
	I0419 18:50:45.727595    8156 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:50:47.860523    8156 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:50:47.860523    8156 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:50:47.860523    8156 status.go:330] multinode-348000 host status = "Running" (err=<nil>)
	I0419 18:50:47.860523    8156 host.go:66] Checking if "multinode-348000" exists ...
	I0419 18:50:47.861943    8156 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:50:50.018526    8156 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:50:50.018630    8156 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:50:50.018630    8156 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:50:52.517969    8156 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:50:52.518794    8156 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:50:52.518794    8156 host.go:66] Checking if "multinode-348000" exists ...
	I0419 18:50:52.532217    8156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 18:50:52.532217    8156 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000 ).state
	I0419 18:50:54.617546    8156 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:50:54.617576    8156 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:50:54.617782    8156 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000 ).networkadapters[0]).ipaddresses[0]
	I0419 18:50:57.152819    8156 main.go:141] libmachine: [stdout =====>] : 172.19.42.231
	
	I0419 18:50:57.152819    8156 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:50:57.153233    8156 sshutil.go:53] new ssh client: &{IP:172.19.42.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000\id_rsa Username:docker}
	I0419 18:50:57.257435    8156 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7252073s)
	I0419 18:50:57.272320    8156 ssh_runner.go:195] Run: systemctl --version
	I0419 18:50:57.294800    8156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 18:50:57.321295    8156 kubeconfig.go:125] found "multinode-348000" server: "https://172.19.42.231:8443"
	I0419 18:50:57.321295    8156 api_server.go:166] Checking apiserver status ...
	I0419 18:50:57.338446    8156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 18:50:57.379316    8156 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2024/cgroup
	W0419 18:50:57.397742    8156 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2024/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 18:50:57.411311    8156 ssh_runner.go:195] Run: ls
	I0419 18:50:57.419661    8156 api_server.go:253] Checking apiserver healthz at https://172.19.42.231:8443/healthz ...
	I0419 18:50:57.426570    8156 api_server.go:279] https://172.19.42.231:8443/healthz returned 200:
	ok
	I0419 18:50:57.427627    8156 status.go:422] multinode-348000 apiserver status = Running (err=<nil>)
	I0419 18:50:57.427702    8156 status.go:257] multinode-348000 status: &{Name:multinode-348000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 18:50:57.427702    8156 status.go:255] checking status of multinode-348000-m02 ...
	I0419 18:50:57.428456    8156 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:50:59.473069    8156 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:50:59.473848    8156 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:50:59.473848    8156 status.go:330] multinode-348000-m02 host status = "Running" (err=<nil>)
	I0419 18:50:59.473848    8156 host.go:66] Checking if "multinode-348000-m02" exists ...
	I0419 18:50:59.474511    8156 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:51:01.593933    8156 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:51:01.593933    8156 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:51:01.594416    8156 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:51:04.099620    8156 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:51:04.100468    8156 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:51:04.100468    8156 host.go:66] Checking if "multinode-348000-m02" exists ...
	I0419 18:51:04.114380    8156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 18:51:04.114380    8156 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m02 ).state
	I0419 18:51:06.213633    8156 main.go:141] libmachine: [stdout =====>] : Running
	
	I0419 18:51:06.213633    8156 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:51:06.213766    8156 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-348000-m02 ).networkadapters[0]).ipaddresses[0]
	I0419 18:51:08.751689    8156 main.go:141] libmachine: [stdout =====>] : 172.19.32.249
	
	I0419 18:51:08.751689    8156 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:51:08.751896    8156 sshutil.go:53] new ssh client: &{IP:172.19.32.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-348000-m02\id_rsa Username:docker}
	I0419 18:51:08.847490    8156 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7330372s)
	I0419 18:51:08.860895    8156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 18:51:08.885988    8156 status.go:257] multinode-348000-m02 status: &{Name:multinode-348000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0419 18:51:08.886043    8156 status.go:255] checking status of multinode-348000-m03 ...
	I0419 18:51:08.886637    8156 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-348000-m03 ).state
	I0419 18:51:10.974604    8156 main.go:141] libmachine: [stdout =====>] : Off
	
	I0419 18:51:10.974604    8156 main.go:141] libmachine: [stderr =====>] : 
	I0419 18:51:10.974708    8156 status.go:330] multinode-348000-m03 host status = "Stopped" (err=<nil>)
	I0419 18:51:10.974708    8156 status.go:343] host is not running, skipping remaining checks
	I0419 18:51:10.974708    8156 status.go:257] multinode-348000-m03 status: &{Name:multinode-348000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (75.00s)

TestMultiNode/serial/StartAfterStop (182.27s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 node start m03 -v=7 --alsologtostderr: (2m26.9463282s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-348000 status -v=7 --alsologtostderr
E0419 18:53:47.858023    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-348000 status -v=7 --alsologtostderr: (35.1287083s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (182.27s)

TestPreload (513.01s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-377700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0419 19:05:44.601655    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-377700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m22.8563164s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-377700 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-377700 image pull gcr.io/k8s-minikube/busybox: (8.236472s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-377700
E0419 19:10:27.867818    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-377700: (38.5620198s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-377700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0419 19:10:44.602244    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-377700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m35.180805s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-377700 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-377700 image list: (7.0476206s)
helpers_test.go:175: Cleaning up "test-preload-377700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-377700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-377700: (41.0939303s)
--- PASS: TestPreload (513.01s)

TestScheduledStopWindows (325.56s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-227600 --memory=2048 --driver=hyperv
E0419 19:15:44.598712    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-227600 --memory=2048 --driver=hyperv: (3m10.97807s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-227600 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-227600 --schedule 5m: (10.4457988s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-227600 -n scheduled-stop-227600
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-227600 -n scheduled-stop-227600: exit status 1 (10.0221251s)

** stderr ** 
	W0419 19:17:19.047040    9224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-227600 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-227600 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.0531147s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-227600 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-227600 --schedule 5s: (10.1990519s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-227600
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-227600: exit status 7 (2.1858836s)

-- stdout --
	scheduled-stop-227600
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W0419 19:18:48.351109    7492 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-227600 -n scheduled-stop-227600
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-227600 -n scheduled-stop-227600: exit status 7 (2.1585772s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0419 19:18:50.541539   11404 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-227600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-227600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-227600: (30.4446258s)
--- PASS: TestScheduledStopWindows (325.56s)

TestRunningBinaryUpgrade (1201.54s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.1716460384.exe start -p running-upgrade-265900 --memory=2200 --vm-driver=hyperv
E0419 19:20:44.599535    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.1716460384.exe start -p running-upgrade-265900 --memory=2200 --vm-driver=hyperv: (10m5.8032599s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-265900 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0419 19:30:44.596256    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-265900 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (8m43.2375782s)
helpers_test.go:175: Cleaning up "running-upgrade-265900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-265900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-265900: (1m11.7040157s)
--- PASS: TestRunningBinaryUpgrade (1201.54s)

TestKubernetesUpgrade (1229.53s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-917600 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-917600 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (5m12.9605067s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-917600
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-917600: (33.9371415s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-917600 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-917600 status --format={{.Host}}: exit status 7 (2.3170464s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0419 19:43:38.567802    4328 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-917600 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv
E0419 19:43:47.895807    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-917600 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv: (6m26.7178604s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-917600 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-917600 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-917600 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (227.6114ms)

-- stdout --
	* [kubernetes-upgrade-917600] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0419 19:50:07.775018   10992 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-917600
	    minikube start -p kubernetes-upgrade-917600 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9176002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-917600 --kubernetes-version=v1.30.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-917600 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv
E0419 19:50:44.599832    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-917600 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv: (7m27.2895486s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-917600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-917600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-917600: (45.8893671s)
--- PASS: TestKubernetesUpgrade (1229.53s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.28s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-732500 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-732500 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (275.021ms)

-- stdout --
	* [NoKubernetes-732500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0419 19:19:23.181373   10912 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.28s)

TestPause/serial/Start (403.75s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-435900 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-435900 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (6m43.7532639s)
--- PASS: TestPause/serial/Start (403.75s)

TestStoppedBinaryUpgrade/Setup (0.83s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.83s)

TestStoppedBinaryUpgrade/Upgrade (752.22s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.3892003497.exe start -p stopped-upgrade-320800 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.3892003497.exe start -p stopped-upgrade-320800 --memory=2200 --vm-driver=hyperv: (5m48.9768044s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.3892003497.exe -p stopped-upgrade-320800 stop
E0419 19:45:44.597515    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.3892003497.exe -p stopped-upgrade-320800 stop: (37.3206424s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-320800 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-320800 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m5.9053431s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (752.22s)

TestPause/serial/SecondStartNoReconfiguration (332.31s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-435900 --alsologtostderr -v=1 --driver=hyperv
E0419 19:40:44.609361    3416 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-614300\client.crt: The system cannot find the path specified.
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-435900 --alsologtostderr -v=1 --driver=hyperv: (5m32.2692724s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (332.31s)

TestPause/serial/Pause (7.85s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-435900 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-435900 --alsologtostderr -v=5: (7.8382221s)
--- PASS: TestPause/serial/Pause (7.85s)

TestPause/serial/VerifyStatus (11.78s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-435900 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-435900 --output=json --layout=cluster: exit status 2 (11.7692049s)

-- stdout --
	{"Name":"pause-435900","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-435900","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W0419 19:46:19.822757   14428 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestPause/serial/VerifyStatus (11.78s)

TestPause/serial/Unpause (7.49s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-435900 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-435900 --alsologtostderr -v=5: (7.4920959s)
--- PASS: TestPause/serial/Unpause (7.49s)

TestPause/serial/PauseAgain (7.64s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-435900 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-435900 --alsologtostderr -v=5: (7.6275536s)
--- PASS: TestPause/serial/PauseAgain (7.64s)

TestPause/serial/DeletePaused (44.79s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-435900 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-435900 --alsologtostderr -v=5: (44.786176s)
--- PASS: TestPause/serial/DeletePaused (44.79s)

TestPause/serial/VerifyDeletedResources (24.46s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (24.4530541s)
--- PASS: TestPause/serial/VerifyDeletedResources (24.46s)

TestStoppedBinaryUpgrade/MinikubeLogs (9.15s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-320800
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-320800: (9.1385164s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (9.15s)

Test skip (29/195)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.02s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-614300 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-614300 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 6116: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

TestFunctional/parallel/DryRun (5.05s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-614300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-614300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0502136s)

-- stdout --
	* [functional-614300] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0419 17:23:25.077040    3740 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0419 17:23:25.078951    3740 out.go:291] Setting OutFile to fd 880 ...
	I0419 17:23:25.079893    3740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 17:23:25.079893    3740 out.go:304] Setting ErrFile to fd 936...
	I0419 17:23:25.079893    3740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 17:23:25.110461    3740 out.go:298] Setting JSON to false
	I0419 17:23:25.115469    3740 start.go:129] hostinfo: {"hostname":"minikube1","uptime":11063,"bootTime":1713561541,"procs":212,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0419 17:23:25.115469    3740 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 17:23:25.119457    3740 out.go:177] * [functional-614300] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0419 17:23:25.122457    3740 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 17:23:25.122457    3740 notify.go:220] Checking for updates...
	I0419 17:23:25.125452    3740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 17:23:25.128452    3740 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0419 17:23:25.130450    3740 out.go:177]   - MINIKUBE_LOCATION=18703
	I0419 17:23:25.132449    3740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 17:23:25.136452    3740 config.go:182] Loaded profile config "functional-614300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:23:25.137457    3740 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.05s)

TestFunctional/parallel/InternationalLanguage (5.03s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-614300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-614300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0268148s)

-- stdout --
	* [functional-614300] minikube v1.33.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0419 17:23:28.216633    4944 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0419 17:23:28.218541    4944 out.go:291] Setting OutFile to fd 988 ...
	I0419 17:23:28.218541    4944 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 17:23:28.219553    4944 out.go:304] Setting ErrFile to fd 880...
	I0419 17:23:28.219553    4944 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 17:23:28.250434    4944 out.go:298] Setting JSON to false
	I0419 17:23:28.255114    4944 start.go:129] hostinfo: {"hostname":"minikube1","uptime":11067,"bootTime":1713561541,"procs":211,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0419 17:23:28.255229    4944 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 17:23:28.262782    4944 out.go:177] * [functional-614300] minikube v1.33.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0419 17:23:28.265782    4944 notify.go:220] Checking for updates...
	I0419 17:23:28.268781    4944 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0419 17:23:28.270786    4944 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 17:23:28.276782    4944 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0419 17:23:28.279781    4944 out.go:177]   - MINIKUBE_LOCATION=18703
	I0419 17:23:28.282781    4944 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 17:23:28.285782    4944 config.go:182] Loaded profile config "functional-614300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 17:23:28.286780    4944 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.03s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)